kvm.vger.kernel.org archive mirror
* [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests
@ 2011-09-22 16:29 Lukas Doktor
  2011-09-22 16:29 ` [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications Lukas Doktor
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Lukas Doktor @ 2011-09-22 16:29 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, akong, lmr, ldoktor, jzupka

Hi guys,

Do you remember the discussion about cgroup testing in autotest vs. LTP? I hope there won't be any doubts about this one, as the cgroup test (+ the first 2 subtests) is strictly focused on cgroup features enforced on KVM guest systems. More subtests (blkio_throttle, memory, cpus, ...) will follow if you approve the test structure.

No matter whether we drop or keep the general 'cgroup' test, the 'cgroup_common.py' library can be imported either from the 'client/tests/cgroup/' directory or directly from the 'client/tests/kvm/tests/' directory.

The modifications to the 'cgroup_common.py' library are backward compatible with the general cgroup test.

See the commits for details.

Regards,
Lukáš Doktor



* [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications
  2011-09-22 16:29 [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lukas Doktor
@ 2011-09-22 16:29 ` Lukas Doktor
  2011-09-23  8:54   ` Jiri Zupka
  2011-09-22 16:29 ` [PATCH 2/2] adds cgroup tests on KVM guests with first test Lukas Doktor
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Lukas Doktor @ 2011-09-22 16:29 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, akong, lmr, ldoktor, jzupka

[FIX] incorrect prop/dir variable usage
[MOD] Use __del__() instead of cleanup() - simplifies the code with a small drawback: failures can't be handled. Anyway, they are not critical and were never handled before...
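For illustration (not part of the patch; the Cleaner class and all names in it are made up), the __del__-based cleanup pattern looks roughly like this. Cleanup runs when the object is released, but an exception raised inside __del__ cannot propagate to the caller, which is exactly the drawback mentioned above:

```python
import logging

class Cleaner(object):
    """Hypothetical stand-in for Cgroup: tracks cgroups, cleans up in __del__."""
    def __init__(self):
        self.resources = ["cg1", "cg2"]   # created cgroup dirs (stand-ins)
        self.removed = []                 # record of what cleanup removed

    def __del__(self):
        # Destructor: best-effort cleanup in reverse creation order.
        # Exceptions raised here are swallowed by the interpreter, so
        # failures can only be logged, never reported to the caller.
        for res in reversed(self.resources):
            try:
                self.removed.append(res)  # stand-in for rm_cgroup(res)
            except Exception as inst:
                logging.error("cleanup of %s failed: %s", res, inst)

c = Cleaner()
removed = c.removed   # keep a reference so we can observe the cleanup
del c                 # drops the last reference; CPython calls __del__ here
print(removed)        # -> ['cg2', 'cg1']
```

The trade-off is the one the commit message names: the destructor fires automatically even on error paths, but a failed umount or rmdir inside it can no longer fail the test.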

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/cgroup/cgroup_common.py |   41 +++++++++++++++++++++++++++++-----
 1 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/client/tests/cgroup/cgroup_common.py b/client/tests/cgroup/cgroup_common.py
index 836a23e..2a95c76 100755
--- a/client/tests/cgroup/cgroup_common.py
+++ b/client/tests/cgroup/cgroup_common.py
@@ -25,8 +25,20 @@ class Cgroup(object):
         self.module = module
         self._client = _client
         self.root = None
+        self.cgroups = []
 
 
+    def __del__(self):
+        """
+        Destructor
+        """
+        self.cgroups.sort(reverse=True)
+        for pwd in self.cgroups[:]:
+            for task in self.get_property("tasks", pwd):
+                if task:
+                    self.set_root_cgroup(int(task))
+            self.rm_cgroup(pwd)
+
     def initialize(self, modules):
         """
         Initializes object for use.
@@ -57,6 +69,7 @@ class Cgroup(object):
         except Exception, inst:
             logging.error("cg.mk_cgroup(): %s" , inst)
             return None
+        self.cgroups.append(pwd)
         return pwd
 
 
@@ -70,6 +83,10 @@ class Cgroup(object):
         """
         try:
             os.rmdir(pwd)
+            self.cgroups.remove(pwd)
+        except ValueError:
+            logging.warn("cg.rm_cgroup(): Removed cgroup which wasn't created "
+                         "using this Cgroup")
         except Exception, inst:
             if not supress:
                 logging.error("cg.rm_cgroup(): %s" , inst)
@@ -329,6 +346,22 @@ class CgroupModules(object):
         self.modules.append([])
         self.mountdir = mkdtemp(prefix='cgroup-') + '/'
 
+    def __del__(self):
+        """
+        Unmount all cgroups and remove the mountdir
+        """
+        for i in range(len(self.modules[0])):
+            if self.modules[2][i]:
+                try:
+                    os.system('umount %s -l' % self.modules[1][i])
+                except:
+                    logging.warn("CGM: Couldn't unmount %s directory"
+                                 % self.modules[1][i])
+        try:
+            os.system('rm -rf %s' % self.mountdir)
+        except:
+            logging.warn("CGM: Couldn't remove the %s directory"
+                         % self.mountdir)
 
     def init(self, _modules):
         """
@@ -376,13 +409,9 @@ class CgroupModules(object):
 
     def cleanup(self):
         """
-        Unmount all cgroups and remove the mountdir.
+        Kept for compatibility
         """
-        for i in range(len(self.modules[0])):
-            if self.modules[2][i]:
-                utils.system('umount %s -l' % self.modules[1][i],
-                             ignore_status=True)
-        shutil.rmtree(self.mountdir)
+        pass
 
 
     def get_pwd(self, module):
-- 
1.7.6



* [PATCH 2/2] adds cgroup tests on KVM guests with first test
  2011-09-22 16:29 [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lukas Doktor
  2011-09-22 16:29 ` [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications Lukas Doktor
@ 2011-09-22 16:29 ` Lukas Doktor
  2011-09-23  8:54   ` Jiri Zupka
  2011-11-03  6:04   ` Suqin
  2011-09-23 13:36 ` [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lucas Meneghel Rodrigues
  2011-09-27 18:23 ` Lucas Meneghel Rodrigues
  3 siblings, 2 replies; 11+ messages in thread
From: Lukas Doktor @ 2011-09-22 16:29 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, akong, lmr, ldoktor, jzupka

basic structure:
 * similar to general client/tests/cgroup/ test (imports from the
   cgroup_common.py)
 * uses classes for better handling
 * improved logging and error handling
 * checks/repairs the guests after each subtest
 * subtest mapping is specified in test dictionary in cgroup.py
 * allows specifying tests/repetitions in tests_base.cfg
            (cgroup_tests = "re1[:loops] re2[:loops] ...")
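As a sketch of how such a spec can be expanded against the subtest dictionary (illustrative only: expand_tests is my name, and the real parsing in cgroup.py uses rfind()/int() rather than rpartition()):

```python
import re

def expand_tests(spec, test_names):
    """Expand a "re1[:loops] re2[:loops]" spec into the list of runs."""
    runs = []
    for item in spec.split():
        name, sep, loops = item.rpartition(':')
        if sep and loops.isdigit():
            pattern, count = name, int(loops)   # explicit loop count given
        else:
            pattern, count = item, 1            # no ":loops" suffix, run once
        for _ in range(count):
            # every subtest whose name matches the regexp runs once per loop
            runs.extend(t for t in test_names if re.match(pattern, t))
    return runs

names = ["blkio_bandwidth_weigth_read", "blkio_bandwidth_weigth_write"]
print(expand_tests(".*:2", names))  # both subtests, repeated twice
```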

TestBlkioBandwidthWeight{Read,Write}:
 * Two similar tests for blkio.weight functionality inside the guest using
   direct I/O and the virtio_blk driver
 * Function:
 1) On 2 VMs adds small (10MB) virtio_blk disk
 2) Assigns each to different cgroup and sets blkio.weight 100/1000
 3) Runs dd with flag=direct (read/write) from the virtio_blk disk
    repeatedly
 4) After 1 minute checks the results. If the ratio is better than 1:3,
    the test passes
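A minimal sketch of the pass criterion (the dd output lines below are made up, avg_throughput is my helper, and the regexp is a simplified version of the one in cgroup.py): average the per-loop dd throughput for each VM and require at least a 1:3 ratio in favour of the weight-1000 VM:

```python
import re

# Simplified pattern for dd's summary line: captures bytes copied and seconds.
RE_DD = r'(\d+) bytes \([^)]*\) copied, (\d+\.?\d*) s'

def avg_throughput(dd_output):
    """Average bytes/s over all dd loops found in the output."""
    rates = [int(b) / float(s) for b, s in re.findall(RE_DD, dd_output)]
    return sum(rates) / len(rates)

# Fabricated sample outputs for the weight-100 and weight-1000 guests:
slow = "10485760 bytes (10 MB) copied, 10.0 s, 1.0 MB/s\n" * 2
fast = "10485760 bytes (10 MB) copied, 2.0 s, 5.2 MB/s\n" * 2

ratio = avg_throughput(fast) / avg_throughput(slow)
print("ratio 1:%.1f -> %s" % (ratio, "PASS" if ratio > 3 else "FAIL"))
# ratio 1:5.0 -> PASS
```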

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/kvm/subtests.cfg.sample |    7 +
 client/tests/kvm/tests/cgroup.py     |  316 ++++++++++++++++++++++++++++++++++
 2 files changed, 323 insertions(+), 0 deletions(-)
 create mode 100644 client/tests/cgroup/__init__.py
 create mode 100644 client/tests/kvm/tests/cgroup.py

diff --git a/client/tests/cgroup/__init__.py b/client/tests/cgroup/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
index 74e550b..79e0656 100644
--- a/client/tests/kvm/subtests.cfg.sample
+++ b/client/tests/kvm/subtests.cfg.sample
@@ -848,6 +848,13 @@ variants:
         only Linux
         type = iofuzz
 
+    - cgroup:
+        type = cgroup
+        # cgroup_tests = "re1[:loops] re2[:loops] ..."
+        cgroup_tests = ".*:1"
+        vms += " vm2"
+        extra_params += " -snapshot"
+
     - virtio_console: install setup image_copy unattended_install.cdrom
         only Linux
         vms = ''
diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
new file mode 100644
index 0000000..4d0ec43
--- /dev/null
+++ b/client/tests/kvm/tests/cgroup.py
@@ -0,0 +1,316 @@
+"""
+cgroup autotest test (on KVM guest)
+@author: Lukas Doktor <ldoktor@redhat.com>
+@copyright: 2011 Red Hat, Inc.
+"""
+import logging, re, sys, tempfile, time, traceback
+from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils
+from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
+
+def run_cgroup(test, params, env):
+    """
+    Tests the cgroup functions on KVM guests.
+     * Uses variable tests (marked by TODO comment) to map the subtests
+    """
+    vms = None
+    tests = None
+
+    # Tests
+    class _TestBlkioBandwidth:
+        """
+        BlkioBandwidth dummy test
+         * Use it as a base class to an actual test!
+         * self.dd_cmd and attr '_set_properties' have to be implemented
+         * It prepares 2 vms and runs self.dd_cmd to simultaneously stress the
+            machines. After 1 minute it kills the dd and gathers the throughput
+            information.
+        """
+        def __init__(self, vms, modules):
+            """
+            Initialization
+            @param vms: list of vms
+            @param modules: initialized cgroup module class
+            """
+            self.vms = vms      # Virt machines
+            self.modules = modules          # cgroup module handler
+            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
+            self.files = []     # Temporary files (files of virt disks)
+            self.devices = []   # Temporary virt devices (PCI drive 1 per vm)
+            self.dd_cmd = None  # DD command used to test the throughput
+
+        def cleanup(self):
+            """
+            Cleanup
+            """
+            err = ""
+            try:
+                for i in range(2):
+                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
+                    self.files[i].close()
+            except Exception, inst:
+                err += "\nCan't remove PCI drive: %s" % inst
+            try:
+                del(self.blkio)
+            except Exception, inst:
+                err += "\nCan't remove Cgroup: %s" % inst
+
+            if err:
+                logging.error("Some parts of cleanup failed:%s", err)
+                raise error.TestError("Some parts of cleanup failed:%s" % err)
+
+        def init(self):
+            """
+            Initialization
+             * assigns vm1 and vm2 into cgroups and sets the properties
+             * creates a new virtio device and adds it into vms
+            """
+            if test.tagged_testname.find('virtio_blk') == -1:
+                logging.warn("You are executing non-virtio_blk test but this "
+                             "particular subtest uses manually added "
+                             "'virtio_blk' device.")
+            if not self.dd_cmd:
+                raise error.TestError("Corrupt class, aren't you trying to run "
+                                      "parent _TestBlkioBandwidth() function?")
+            if len(self.vms) < 2:
+                raise error.TestError("Test needs at least 2 vms.")
+
+            # cgroups
+            pwd = []
+            blkio = self.blkio
+            if blkio.initialize(self.modules):
+                raise error.TestError("Could not initialize blkio Cgroup")
+            for i in range(2):
+                pwd.append(blkio.mk_cgroup())
+                if pwd[i] == None:
+                    raise error.TestError("Can't create cgroup")
+                if blkio.set_cgroup(self.vms[i].get_shell_pid(), pwd[i]):
+                    raise error.TestError("Could not set cgroup")
+                # Move all existing threads into cgroup
+                for tmp in utils.system_output("ps -L --ppid=%d -o lwp"
+                                % self.vms[i].get_shell_pid()).split('\n')[1:]:
+                    if blkio.set_cgroup(int(tmp), pwd[i]):
+                        raise error.TestError("Could not set cgroup")
+            if self.blkio.set_property("blkio.weight", 100, pwd[0]):
+                raise error.TestError("Could not set blkio.weight")
+            if self.blkio.set_property("blkio.weight", 1000, pwd[1]):
+                raise error.TestError("Could not set blkio.weight")
+
+            # Add dummy drives
+            for i in range(2):
+                self.files.append(tempfile.NamedTemporaryFile(
+                                        prefix="cgroup-disk-",
+                                        suffix=".iso"))
+                utils.system("dd if=/dev/zero of=%s bs=1M count=10 &>/dev/null"
+                             % (self.files[i].name))
+                out = vms[i].monitor.cmd("pci_add auto storage file=%s,"
+                                "if=virtio,snapshot=off,cache=off"
+                                % (self.files[i].name))
+                out = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), '
+                                 'function \d+', out).groups()
+                self.devices.append("%s:%s:%s" % out)
+
+
+        def run(self):
+            """
+            Actual test:
+             * executes self.dd_cmd simultaneously on both vms.
+            """
+            sessions = []
+            out = []
+            sessions.append(vms[0].wait_for_login(timeout=30))
+            sessions.append(vms[1].wait_for_login(timeout=30))
+            sessions.append(vms[0].wait_for_login(timeout=30))
+            sessions.append(vms[1].wait_for_login(timeout=30))
+            sessions[0].sendline(self.dd_cmd)
+            sessions[1].sendline(self.dd_cmd)
+            time.sleep(60)
+
+            cmd = "rm -f /tmp/cgroup_lock; killall -9 dd"
+            sessions[2].sendline(cmd)
+            sessions[3].sendline(cmd)
+            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
+                      '\d+\.*\d* \w./s')
+            out = []
+            for i in range(2):
+                out.append(sessions[i].read_up_to_prompt())
+                out[i] = [int(_[0])/float(_[1])
+                            for _ in re.findall(re_dd, out[i])[1:-1]]
+                logging.debug("dd(%d) output: %s", i, out[i])
+                out[i] = [min(out[i]), sum(out[i])/len(out[i]), max(out[i]),
+                          len(out[i])]
+
+            for session in sessions:
+                session.close()
+
+            logging.debug("dd values (min,avg,max,ddloops):\nout1: %s\nout2: %s"
+                          ,out[0], out[1])
+
+            out1 = out[0][1]
+            out2 = out[1][1]
+            # In theory, out1 should be 10 times smaller than out2.
+            if out1*3  > out2:
+                raise error.TestFail("dd values: %s:%s (1:%f), limit 1:3"
+                                     ", theoretical: 1:10"
+                                     % (out1, out2, out2/out1))
+            else:
+                logging.info("dd values: %s:%s (1:%s)", out1, out2, out2/out1)
+
+
+
+    class TestBlkioBandwidthWeigthRead(_TestBlkioBandwidth):
+        """
+        Tests the blkio.weight capability using simultaneous read on 2 vms
+        """
+        def __init__(self, vms, modules):
+            """
+            Initialization
+            @param vms: list of vms
+            @param modules: initialized cgroup module class
+            """
+            _TestBlkioBandwidth.__init__(self, vms, modules)
+            self.dd_cmd = ("export FILE=$(ls /dev/vd* | tail -n 1); touch "
+                           "/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];"
+                           "do dd if=$FILE of=/dev/null iflag=direct bs=100K;"
+                           "done")
+
+
+    class TestBlkioBandwidthWeigthWrite(_TestBlkioBandwidth):
+        """
+        Tests the blkio.weight capability using simultaneous write on 2 vms
+        """
+        def __init__(self, vms, modules):
+            """
+            Initialization
+            @param vms: list of vms
+            @param modules: initialized cgroup module class
+            """
+            _TestBlkioBandwidth.__init__(self, vms, modules)
+            self.dd_cmd = ('export FILE=$(ls /dev/vd* | tail -n 1); touch '
+                           '/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];'
+                           'do dd if=/dev/zero of=$FILE oflag=direct bs=100K;'
+                           'done')
+
+
+    def _check_vms(vms):
+        """
+        Checks the vitality of VM
+        @param vms: list of vm's
+        """
+        for i in range(len(vms)):
+            vms[i].verify_alive()
+            _ = vms[i].wait_for_login(timeout=60)
+            out = _.cmd_output("dmesg -c")
+            _.close()
+            del(_)
+            if out.find("BUG") != -1:
+                logging.error("BUG occurred in dmesg:\n%s", out)
+                logging.warn("recreate VM(%s)", i)
+                # The vm has to be recreated to reset the qemu PCI state
+                vms[i].create()
+
+
+    # Setup
+    # TODO: Add all new tests here
+    tests = {"blkio_bandwidth_weigth_read"  : TestBlkioBandwidthWeigthRead,
+             "blkio_bandwidth_weigth_write" : TestBlkioBandwidthWeigthWrite,
+            }
+    modules = CgroupModules()
+    if (modules.init(['cpuset', 'cpu', 'cpuacct', 'memory', 'devices',
+                      'freezer', 'net_cls', 'blkio']) <= 0):
+        raise error.TestFail('Can\'t mount any cgroup modules')
+    # Add all vms
+    vms = []
+    for vm in params.get("vms", "main_vm").split():
+        vm = env.get_vm(vm)
+        vm.verify_alive()
+        timeout = int(params.get("login_timeout", 360))
+        _ = vm.wait_for_login(timeout=timeout)
+        _.close()
+        del(_)
+        vms.append(vm)
+
+
+    # Execute tests
+    results = ""
+    # cgroup_tests = "re1[:loops] re2[:loops] ... ... ..."
+    for j in params.get("cgroup_tests").split():
+        try:
+            loops = int(j[j.rfind(':')+1:])
+            j = j[:j.rfind(':')]
+        except:
+            loops = 1
+        for _loop in range(loops):
+            for i in [_ for _ in tests.keys() if re.match(j, _)]:
+                logging.info("%s: Entering the test", i)
+                try:
+                    _check_vms(vms)
+                    tst = tests[i](vms, modules)
+                    tst.init()
+                    tst.run()
+                except error.TestFail, inst:
+                    logging.error("%s: Leaving, test FAILED (TestFail): %s",
+                                  i, inst)
+                    results += "\n * %s: Test FAILED (TestFail): %s" % (i, inst)
+                    try:
+                        tst.cleanup()
+                    except Exception, inst:
+                        tmps = ""
+                        for tmp in traceback.format_exception(
+                                                sys.exc_info()[0],
+                                                sys.exc_info()[1],
+                                                sys.exc_info()[2]):
+                            tmps += "%s cleanup: %s" % (i, tmp)
+                        logging.info("%s: cleanup also failed\n%s", i, tmps)
+                except error.TestError, inst:
+                    tmps = ""
+                    for tmp in traceback.format_exception(
+                                            sys.exc_info()[0],
+                                            sys.exc_info()[1],
+                                            sys.exc_info()[2]):
+                        tmps += "%s: %s" % (i, tmp)
+                    logging.error("%s: Leaving, test FAILED (TestError): %s",
+                                  i, tmps)
+                    results += "\n * %s: Test FAILED (TestError): %s"% (i, inst)
+                    try:
+                        tst.cleanup()
+                    except Exception, inst:
+                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
+                except Exception, inst:
+                    tmps = ""
+                    for tmp in traceback.format_exception(
+                                            sys.exc_info()[0],
+                                            sys.exc_info()[1],
+                                            sys.exc_info()[2]):
+                        tmps += "%s: %s" % (i, tmp)
+                    logging.error("%s: Leaving, test FAILED (Exception): %s",
+                                  i, tmps)
+                    results += "\n * %s: Test FAILED (Exception): %s"% (i, inst)
+                    try:
+                        tst.cleanup()
+                    except Exception, inst:
+                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
+                else:
+                    try:
+                        tst.cleanup()
+                    except Exception, inst:
+                        tmps = ""
+                        for tmp in traceback.format_exception(
+                                                sys.exc_info()[0],
+                                                sys.exc_info()[1],
+                                                sys.exc_info()[2]):
+                            tmps += "%s cleanup: %s" % (i, tmp)
+                        logging.info("%s: Leaving, test passed but cleanup "
+                                     "FAILED\n%s", i, tmps)
+                        results += ("\n * %s: Test passed but cleanup FAILED"
+                                    % (i))
+                    else:
+                        logging.info("%s: Leaving, test PASSED", i)
+                        results += "\n * %s: Test PASSED" % (i)
+
+    logging.info("SUM: All tests finished (%d PASS / %d FAIL = %d TOTAL)%s",
+                 results.count("PASSED"), results.count("FAILED"),
+                 (results.count("PASSED")+results.count("FAILED")), results)
+    if results.count("FAILED"):
+        raise error.TestFail("Some subtests failed")
+
-- 
1.7.6



* Re: [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications
  2011-09-22 16:29 ` [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications Lukas Doktor
@ 2011-09-23  8:54   ` Jiri Zupka
  0 siblings, 0 replies; 11+ messages in thread
From: Jiri Zupka @ 2011-09-23  8:54 UTC (permalink / raw)
  To: autotest; +Cc: kvm, kvm-autotest, akong, lmr, Lukas Doktor

Acked-by: Jiří Župka <jzupka@redhat.com>



* Re: [PATCH 2/2] adds cgroup tests on KVM guests with first test
  2011-09-22 16:29 ` [PATCH 2/2] adds cgroup tests on KVM guests with first test Lukas Doktor
@ 2011-09-23  8:54   ` Jiri Zupka
  2011-11-03  6:04   ` Suqin
  1 sibling, 0 replies; 11+ messages in thread
From: Jiri Zupka @ 2011-09-23  8:54 UTC (permalink / raw)
  To: autotest; +Cc: Lukas Doktor, kvm, kvm-autotest, akong, lmr

Acked-by: Jiří Župka <jzupka@redhat.com>

----- Original Message -----
> basic structure:
>  * similar to general client/tests/cgroup/ test (imports from the
>    cgroup_common.py)
>  * uses classes for better handling
>  * improved logging and error handling
>  * checks/repair the guests after each subtest
>  * subtest mapping is specified in test dictionary in cgroup.py
>  * allows to specify tests/repetions in tests_base.cfg
>             (cgroup_tests = "re1[:loops] re2[:loops] ...")
> 
> TestBlkioBandwidthWeight{Read,Write}:
>  * Two similar tests for blkio.weight functionality inside the guest
>  using
>    direct io and virtio_blk driver
>  * Function:
>  1) On 2 VMs adds small (10MB) virtio_blk disk
>  2) Assigns each to different cgroup and sets blkio.weight 100/1000
>  3) Runs dd with flag=direct (read/write) from the virtio_blk disk
>     repeatidly
>  4) After 1 minute checks the results. If the ratio is better then
>  1:3,
>     test passes
> 
> Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
> ---
>  client/tests/kvm/subtests.cfg.sample |    7 +
>  client/tests/kvm/tests/cgroup.py     |  316
>  ++++++++++++++++++++++++++++++++++
>  2 files changed, 323 insertions(+), 0 deletions(-)
>  create mode 100644 client/tests/cgroup/__init__.py
>  create mode 100644 client/tests/kvm/tests/cgroup.py
> 
> diff --git a/client/tests/cgroup/__init__.py
> b/client/tests/cgroup/__init__.py
> new file mode 100644
> index 0000000..e69de29
> diff --git a/client/tests/kvm/subtests.cfg.sample
> b/client/tests/kvm/subtests.cfg.sample
> index 74e550b..79e0656 100644
> --- a/client/tests/kvm/subtests.cfg.sample
> +++ b/client/tests/kvm/subtests.cfg.sample
> @@ -848,6 +848,13 @@ variants:
>          only Linux
>          type = iofuzz
>  
> +    - cgroup:
> +        type = cgroup
> +        # cgroup_tests = "re1[:loops] re2[:loops] ..."
> +        cgroup_tests = ".*:1"
> +        vms += " vm2"
> +        extra_params += " -snapshot"
> +
>      - virtio_console: install setup image_copy
>      unattended_install.cdrom
>          only Linux
>          vms = ''
> diff --git a/client/tests/kvm/tests/cgroup.py
> b/client/tests/kvm/tests/cgroup.py
> new file mode 100644
> index 0000000..4d0ec43
> --- /dev/null
> +++ b/client/tests/kvm/tests/cgroup.py
> @@ -0,0 +1,316 @@
> +"""
> +cgroup autotest test (on KVM guest)
> +@author: Lukas Doktor <ldoktor@redhat.com>
> +@copyright: 2011 Red Hat, Inc.
> +"""
> +import logging, re, sys, tempfile, time, traceback
> +from autotest_lib.client.common_lib import error
> +from autotest_lib.client.bin import utils
> +from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup,
> CgroupModules
> +
> +def run_cgroup(test, params, env):
> +    """
> +    Tests the cgroup functions on KVM guests.
> +     * Uses variable tests (marked by TODO comment) to map the
> subtests
> +    """
> +    vms = None
> +    tests = None
> +
> +    # Tests
> +    class _TestBlkioBandwidth:
> +        """
> +        BlkioBandwidth dummy test
> +         * Use it as a base class to an actual test!
> +         * self.dd_cmd and attr '_set_properties' have to be
> implemented
> +         * It prepares 2 vms and run self.dd_cmd to simultaniously
> stress the
> +            machines. After 1 minute it kills the dd and gather the
> throughput
> +            informations.
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            self.vms = vms      # Virt machines
> +            self.modules = modules          # cgroup module handler
> +            self.blkio = Cgroup('blkio', '')    # cgroup blkio
> handler
> +            self.files = []     # Temporary files (files of virt
> disks)
> +            self.devices = []   # Temporary virt devices (PCI drive
> 1 per vm)
> +            self.dd_cmd = None  # DD command used to test the throughput
> +
> +        def cleanup(self):
> +            """
> +            Cleanup
> +            """
> +            err = ""
> +            try:
> +                for i in range (2):
> +                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
> +                    self.files[i].close()
> +            except Exception, inst:
> +                err += "\nCan't remove PCI drive: %s" % inst
> +            try:
> +                del(self.blkio)
> +            except Exception, inst:
> +                err += "\nCan't remove Cgroup: %s" % inst
> +
> +            if err:
> +                logging.error("Some parts of cleanup failed:%s", err)
> +                raise error.TestError("Some parts of cleanup failed:%s" % err)
> +
> +        def init(self):
> +            """
> +            Initialization
> +             * assigns vm1 and vm2 into cgroups and sets the properties
> +             * creates a new virtio device and adds it into vms
> +            """
> +            if test.tagged_testname.find('virtio_blk') == -1:
> +                logging.warn("You are executing non-virtio_blk test but this "
> +                             "particular subtest uses manually added "
> +                             "'virtio_blk' device.")
> +            if not self.dd_cmd:
> +                raise error.TestError("Corrupt class, aren't you trying to run "
> +                                      "parent _TestBlkioBandwidth() function?")
> +            if len(self.vms) < 2:
> +                raise error.TestError("Test needs at least 2 vms.")
> +
> +            # cgroups
> +            pwd = []
> +            blkio = self.blkio
> +            if blkio.initialize(self.modules):
> +                raise error.TestError("Could not initialize blkio Cgroup")
> +            for i in range(2):
> +                pwd.append(blkio.mk_cgroup())
> +                if pwd[i] == None:
> +                    raise error.TestError("Can't create cgroup")
> +                if blkio.set_cgroup(self.vms[i].get_shell_pid(), pwd[i]):
> +                    raise error.TestError("Could not set cgroup")
> +                # Move all existing threads into cgroup
> +                for tmp in utils.system_output("ps -L --ppid=%d -o lwp"
> +                                % self.vms[i].get_shell_pid()).split('\n')[1:]:
> +                    if blkio.set_cgroup(int(tmp), pwd[i]):
> +                        raise error.TestError("Could not set cgroup")
> +            if self.blkio.set_property("blkio.weight", 100, pwd[0]):
> +                raise error.TestError("Could not set blkio.weight")
> +            if self.blkio.set_property("blkio.weight", 1000, pwd[1]):
> +                raise error.TestError("Could not set blkio.weight")
> +
> +            # Add dumm drives
> +            for i in range(2):
> +                self.files.append(tempfile.NamedTemporaryFile(
> +                                        prefix="cgroup-disk-",
> +                                        suffix=".iso"))
> +                utils.system("dd if=/dev/zero of=%s bs=1M count=10 &>/dev/null"
> +                             % (self.files[i].name))
> +                out = vms[i].monitor.cmd("pci_add auto storage file=%s,"
> +                                "if=virtio,snapshot=off,cache=off"
> +                                % (self.files[i].name))
> +                out = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), '
> +                                 'function \d+', out).groups()
> +                self.devices.append("%s:%s:%s" % out)
> +
> +
> +        def run(self):
> +            """
> +            Actual test:
> +             * executes self.dd_cmd simultanously on both vms.
> +            """
> +            sessions = []
> +            out = []
> +            sessions.append(vms[0].wait_for_login(timeout=30))
> +            sessions.append(vms[1].wait_for_login(timeout=30))
> +            sessions.append(vms[0].wait_for_login(timeout=30))
> +            sessions.append(vms[1].wait_for_login(timeout=30))
> +            sessions[0].sendline(self.dd_cmd)
> +            sessions[1].sendline(self.dd_cmd)
> +            time.sleep(60)
> +
> +            cmd = "rm -f /tmp/cgroup_lock; killall -9 dd"
> +            sessions[2].sendline(cmd)
> +            sessions[3].sendline(cmd)
> +            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
> +                      '\d+\.*\d* \w./s')
> +            out = []
> +            for i in range(2):
> +                out.append(sessions[i].read_up_to_prompt())
> +                out[i] = [int(_[0])/float(_[1])
> +                            for _ in re.findall(re_dd, out[i])[1:-1]]
> +                logging.debug("dd(%d) output: %s", i, out[i])
> +                out[i] = [min(out[i]), sum(out[i])/len(out[i]), max(out[i]),
> +                          len(out[i])]
> +
> +            for session in sessions:
> +                session.close()
> +
> +            logging.debug("dd values (min,avg,max,ddloops):\nout1: %s\nout2: %s"
> +                          ,out[0], out[1])
> +
> +            out1 = out[0][1]
> +            out2 = out[1][1]
> +            # In theory out1 should be 10times smaller, than out2.
> +            if out1*3 > out2:
> +                raise error.TestFail("dd values: %s:%s (1:%f), limit 1:2.5"
> +                                     ", theoretical: 1:10"
> +                                     % (out1, out2, out2/out1))
> +            else:
> +                logging.info("dd values: %s:%s (1:%s)", out1, out2, out2/out1)
> +
> +
> +
> +    class TestBlkioBandwidthWeigthRead(_TestBlkioBandwidth):
> +        """
> +        Tests the blkio.weight capability using simultanious read on 2 vms
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            _TestBlkioBandwidth.__init__(self, vms, modules)
> +            self.dd_cmd = ("export FILE=$(ls /dev/vd* | tail -n 1); touch "
> +                           "/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];"
> +                           "do dd if=$FILE of=/dev/null iflag=direct bs=100K;"
> +                           "done")
> +
> +
> +    class TestBlkioBandwidthWeigthWrite(_TestBlkioBandwidth):
> +        """
> +        Tests the blkio.weight capability using simultanious write on 2 vms
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            _TestBlkioBandwidth.__init__(self, vms, modules)
> +            self.dd_cmd = ('export FILE=$(ls /dev/vd* | tail -n 1); touch '
> +                           '/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];'
> +                           'do dd if=/dev/zero of=$FILE oflag=direct bs=100K;'
> +                           'done')
> +
> +
> +    def _check_vms(vms):
> +        """
> +        Checks the vitality of VM
> +        @param vms: list of vm's
> +        """
> +        for i in range(len(vms)):
> +            vms[i].verify_alive()
> +            _ = vms[i].wait_for_login(timeout=60)
> +            out = _.cmd_output("dmesg -c")
> +            _.close()
> +            del(_)
> +            if out.find("BUG") != -1:
> +                logging.error("BUG occured in dmesg:\n%s", out)
> +                logging.warn("recreate VM(%s)", i)
> +                # The vm have to be recreate to reset the qemu PCI state
> +                vms[i].create()
> +
> +
> +    # Setup
> +    # TODO: Add all new tests here
> +    tests = {"blkio_bandwidth_weigth_read"  : TestBlkioBandwidthWeigthRead,
> +             "blkio_bandwidth_weigth_write" : TestBlkioBandwidthWeigthWrite,
> +            }
> +    modules = CgroupModules()
> +    if (modules.init(['cpuset', 'cpu', 'cpuacct', 'memory', 'devices',
> +                      'freezer', 'net_cls', 'blkio']) <= 0):
> +        raise error.TestFail('Can\'t mount any cgroup modules')
> +    # Add all vms
> +    vms = []
> +    for vm in params.get("vms", "main_vm").split():
> +        vm = env.get_vm(vm)
> +        vm.verify_alive()
> +        timeout = int(params.get("login_timeout", 360))
> +        _ = vm.wait_for_login(timeout=timeout)
> +        _.close()
> +        del(_)
> +        vms.append(vm)
> +
> +
> +    # Execute tests
> +    results = ""
> +    # cgroup_tests = "re1[:loops] re2[:loops] ... ... ..."
> +    for j in params.get("cgroup_tests").split():
> +        try:
> +            loops = int(j[j.rfind(':')+1:])
> +            j = j[:j.rfind(':')]
> +        except:
> +            loops = 1
> +        for _loop in range(loops):
> +            for i in [_ for _ in tests.keys() if re.match(j, _)]:
> +                logging.info("%s: Entering the test", i)
> +                try:
> +                    _check_vms(vms)
> +                    tst = tests[i](vms, modules)
> +                    tst.init()
> +                    tst.run()
> +                except error.TestFail, inst:
> +                    logging.error("%s: Leaving, test FAILED (TestFail): %s",
> +                                  i, inst)
> +                    results += "\n * %s: Test FAILED (TestFail): %s" % (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        tmps = ""
> +                        for tmp in traceback.format_exception(
> +                                                sys.exc_info()[0],
> +                                                sys.exc_info()[1],
> +                                                sys.exc_info()[2]):
> +                            tmps += "%s cleanup: %s" % (i, tmp)
> +                        logging.info("%s: cleanup also failed\n%s", i, tmps)
> +                except error.TestError, inst:
> +                    tmps = ""
> +                    for tmp in traceback.format_exception(
> +                                            sys.exc_info()[0],
> +                                            sys.exc_info()[1],
> +                                            sys.exc_info()[2]):
> +                        tmps += "%s: %s" % (i, tmp)
> +                    logging.error("%s: Leaving, test FAILED (TestError): %s",
> +                                  i, tmps)
> +                    results += "\n * %s: Test FAILED (TestError): %s"% (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
> +                except Exception, inst:
> +                    tmps = ""
> +                    for tmp in traceback.format_exception(
> +                                            sys.exc_info()[0],
> +                                            sys.exc_info()[1],
> +                                            sys.exc_info()[2]):
> +                        tmps += "%s: %s" % (i, tmp)
> +                    logging.error("%s: Leaving, test FAILED (Exception): %s",
> +                                  i, tmps)
> +                    results += "\n * %s: Test FAILED (Exception): %s"% (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
> +                else:
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        tmps = ""
> +                        for tmp in traceback.format_exception(
> +                                                sys.exc_info()[0],
> +                                                sys.exc_info()[1],
> +                                                sys.exc_info()[2]):
> +                            tmps += "%s cleanup: %s" % (i, tmp)
> +                        logging.info("%s: Leaving, test passed but cleanup "
> +                                     "FAILED\n%s", i, tmps)
> +                        results += ("\n * %s: Test passed but cleanup FAILED"
> +                                    % (i))
> +                    else:
> +                        logging.info("%s: Leaving, test PASSED", i)
> +                        results += "\n * %s: Test PASSED" % (i)
> +
> +    logging.info("SUM: All tests finished (%d PASS / %d FAIL = %d TOTAL)%s",
> +                 results.count("PASSED"), results.count("FAILED"),
> +                 (results.count("PASSED")+results.count("FAILED")), results)
> +    if results.count("FAILED"):
> +        raise error.TestFail("Some subtests failed")
> +
> --
> 1.7.6
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread
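The `cgroup_tests = "re1[:loops] re2[:loops] ..."` parsing in the patch above can be sketched as a standalone helper. This is a minimal illustration of the same logic (a missing or non-numeric `:loops` suffix means "run once"), not the patch code itself:

```python
def parse_cgroup_tests(spec):
    """Split 'name[:loops]' entries; loops defaults to 1, as in the patch."""
    parsed = []
    for item in spec.split():
        name, sep, loops = item.rpartition(':')
        if sep and loops.isdigit():
            parsed.append((name, int(loops)))
        else:
            # no ':loops' suffix (or a non-numeric one): run the test once
            parsed.append((item, 1))
    return parsed
```

For example, `parse_cgroup_tests("blkio.*:2 memory")` yields `[("blkio.*", 2), ("memory", 1)]`.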

* Re: [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests
  2011-09-22 16:29 [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lukas Doktor
  2011-09-22 16:29 ` [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications Lukas Doktor
  2011-09-22 16:29 ` [PATCH 2/2] adds cgroup tests on KVM guests with first test Lukas Doktor
@ 2011-09-23 13:36 ` Lucas Meneghel Rodrigues
  2011-09-23 15:02   ` Lukáš Doktor
  2011-09-27 18:23 ` Lucas Meneghel Rodrigues
  3 siblings, 1 reply; 11+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-09-23 13:36 UTC (permalink / raw)
  To: Lukas Doktor; +Cc: autotest, kvm, kvm-autotest, akong, jzupka

On 09/22/2011 01:29 PM, Lukas Doktor wrote:
> Hi guys,
>
> Do you remember the discussion about cgroup testing in autotest vs. LTP? I hope there won't be any doubts about this one as ground_test (+ first 2 subtests) are strictly focused on cgroups features enforced on KVM guest systems. Also more subtests will follow if you approve the test structure (blkio_throttle, memory, cpus...).

Yes, absolutely.

>
> No matter whether we drop or keep the general 'cgroup' test. The 'cgroup_common.py' library can be imported either from 'client/tests/cgroup/' directory or directly from 'client/tests/kvm/tests/' directory.

I don't think we really need to drop the test. It's useful anyway, even 
though there are LTP tests that sort of cover it.

>
> The modifications of 'cgroup_common.py' library is backward compatible with general cgroup test.
>
> See the commits for details.

Now that we moved to github, I'd like to go with the following model of 
contribution:

1) You create a user on github if you don't have one
2) Create a public autotest fork
3) Commit the changes to a topic branch appropriately named
4) Make a pull request to autotest:master
5) You still send the patches to the mailing list normally, but mention 
the pull request URL on the message.

That's it, we are still trying out things, so if this doesn't work out, 
we'll update the process. Is it possible that you do that and rebase 
your patches?

Oh, and since patchwork is still out due to DNS outage, could you guys 
re-spin your client-server patches using the same process I mentioned? 
Thank you!

Lucas

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests
  2011-09-23 13:36 ` [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lucas Meneghel Rodrigues
@ 2011-09-23 15:02   ` Lukáš Doktor
  2011-09-23 17:41     ` Lucas Meneghel Rodrigues
  0 siblings, 1 reply; 11+ messages in thread
From: Lukáš Doktor @ 2011-09-23 15:02 UTC (permalink / raw)
  To: Lucas Meneghel Rodrigues; +Cc: autotest, kvm, kvm-autotest, akong, jzupka

Dne 23.9.2011 15:36, Lucas Meneghel Rodrigues napsal(a):
> On 09/22/2011 01:29 PM, Lukas Doktor wrote:
>> Hi guys,
>>
>> Do you remember the discussion about cgroup testing in autotest vs. 
>> LTP? I hope there won't be any doubts about this one as ground_test 
>> (+ first 2 subtests) are strictly focused on cgroups features 
>> enforced on KVM guest systems. Also more subtests will follow if you 
>> approve the test structure (blkio_throttle, memory, cpus...).
>
> Yes, absolutely.
>
>>
>> No matter whether we drop or keep the general 'cgroup' test. The 
>> 'cgroup_common.py' library can be imported either from 
>> 'client/tests/cgroup/' directory or directly from 
>> 'client/tests/kvm/tests/' directory.
>
> I don't think we really need to drop the test. It's useful anyway, 
> even though there are LTP tests that sort of cover ir.
Well I have some other ones in a queue. My focus is now on the KVM 
specific tests, but I might send a couple more general cgroup tests later...

>
>>
>> The modifications of 'cgroup_common.py' library is backward 
>> compatible with general cgroup test.
>>
>> See the commits for details.
>
> Now that we moved to github, I'd like to go with the following model 
> of contribution:
>
> 1) You create a user on github if you don't have one
> 2) Create a public autotest fork
> 3) Commit the changes to a topic branch appropriately named
> 4) Make a pull request to autotest:master
> 5) You still send the patches to the mailing list normally, but 
> mention the pull request URL on the message.
>
> That's it, we are still trying out things, so if this doesn't work 
> out, we'll update the process. Is it possible that you do that and 
> rebase your patches?
>
> Oh, and since patchwork is still out due to DNS outage, could you guys 
> re-spin your client-server patches using the same process I mentioned? 
> Thank you!
>
> Lucas

Hi Lucas,

pull request sent:
https://github.com/autotest/autotest/pull/6

I'll remind Jiří to do the same with the client-server patches...

Cheers,
Lukáš

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests
  2011-09-23 15:02   ` Lukáš Doktor
@ 2011-09-23 17:41     ` Lucas Meneghel Rodrigues
  0 siblings, 0 replies; 11+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-09-23 17:41 UTC (permalink / raw)
  To: Lukáš Doktor; +Cc: autotest, kvm, kvm-autotest, akong, jzupka

On Fri 23 Sep 2011 12:02:12 PM BRT, Lukáš Doktor wrote:
> Dne 23.9.2011 15:36, Lucas Meneghel Rodrigues napsal(a):
>> On 09/22/2011 01:29 PM, Lukas Doktor wrote:
>>> Hi guys,
>>>
>>> Do you remember the discussion about cgroup testing in autotest vs. 
>>> LTP? I hope there won't be any doubts about this one as ground_test 
>>> (+ first 2 subtests) are strictly focused on cgroups features 
>>> enforced on KVM guest systems. Also more subtests will follow if you 
>>> approve the test structure (blkio_throttle, memory, cpus...).
>>
>> Yes, absolutely.
>>
>>>
>>> No matter whether we drop or keep the general 'cgroup' test. The 
>>> 'cgroup_common.py' library can be imported either from 
>>> 'client/tests/cgroup/' directory or directly from 
>>> 'client/tests/kvm/tests/' directory.
>>
>> I don't think we really need to drop the test. It's useful anyway, 
>> even though there are LTP tests that sort of cover ir.
> Well I have some other ones in a queue. My focus is now on the KVM 
> specific tests, but I might send couple more general cgroup tests 
> later...
>
>>
>>>
>>> The modifications of 'cgroup_common.py' library is backward 
>>> compatible with general cgroup test.
>>>
>>> See the commits for details.
>>
>> Now that we moved to github, I'd like to go with the following model 
>> of contribution:
>>
>> 1) You create a user on github if you don't have one
>> 2) Create a public autotest fork
>> 3) Commit the changes to a topic branch appropriately named
>> 4) Make a pull request to autotest:master
>> 5) You still send the patches to the mailing list normally, but 
>> mention the pull request URL on the message.
>>
>> That's it, we are still trying out things, so if this doesn't work 
>> out, we'll update the process. Is it possible that you do that and 
>> rebase your patches?
>>
>> Oh, and since patchwork is still out due to DNS outage, could you 
>> guys re-spin your client-server patches using the same process I 
>> mentioned? Thank you!
>>
>> Lucas
>
> Hi Lucas,
>
> pull request sent:
> https://github.com/autotest/autotest/pull/6
>
> I'll remind Jiří to do the same with the client-server patches...
>
> Cheers,
> Lukáš

Hi Lukas,

I've made comments on your pull request and I'm waiting on your 
changes. Now, I need to figure out if there's a way to cc a mailing 
list on pull requests...

Still figuring out how the workflow will look from now on, let's 
see how it goes.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests
  2011-09-22 16:29 [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lukas Doktor
                   ` (2 preceding siblings ...)
  2011-09-23 13:36 ` [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lucas Meneghel Rodrigues
@ 2011-09-27 18:23 ` Lucas Meneghel Rodrigues
  3 siblings, 0 replies; 11+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-09-27 18:23 UTC (permalink / raw)
  To: Lukas Doktor; +Cc: autotest, kvm, kvm-autotest, akong, jzupka

On 09/22/2011 01:29 PM, Lukas Doktor wrote:
> Hi guys,
>
> Do you remember the discussion about cgroup testing in autotest vs. LTP? I hope there won't be any doubts about this one as ground_test (+ first 2 subtests) are strictly focused on cgroups features enforced on KVM guest systems. Also more subtests will follow if you approve the test structure (blkio_throttle, memory, cpus...).
>
> No matter whether we drop or keep the general 'cgroup' test. The 'cgroup_common.py' library can be imported either from 'client/tests/cgroup/' directory or directly from 'client/tests/kvm/tests/' directory.
>
> The modifications of 'cgroup_common.py' library is backward compatible with general cgroup test.
>
> See the commits for details.
>

Ok, after quite a bit of back and forth, pull request merged, thanks Lukas!

> Regards,
> Lukáš Doktor
>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/2] adds cgroup tests on KVM guests with first test
  2011-09-22 16:29 ` [PATCH 2/2] adds cgroup tests on KVM guests with first test Lukas Doktor
  2011-09-23  8:54   ` Jiri Zupka
@ 2011-11-03  6:04   ` Suqin
  2011-11-03  7:32     ` Lukáš Doktor
  1 sibling, 1 reply; 11+ messages in thread
From: Suqin @ 2011-11-03  6:04 UTC (permalink / raw)
  To: Lukas Doktor; +Cc: kvm, autotest, kvm-autotest

On 09/23/2011 12:29 AM, Lukas Doktor wrote:
> basic structure:
>   * similar to general client/tests/cgroup/ test (imports from the
>     cgroup_common.py)
>   * uses classes for better handling
>   * improved logging and error handling
>   * checks/repair the guests after each subtest
>   * subtest mapping is specified in test dictionary in cgroup.py
>   * allows specifying tests/repetitions in tests_base.cfg
>              (cgroup_tests = "re1[:loops] re2[:loops] ...")
>
> TestBlkioBandwidthWeight{Read,Write}:
>   * Two similar tests for blkio.weight functionality inside the guest using
>     direct io and virtio_blk driver
>   * Function:
>   1) On 2 VMs adds small (10MB) virtio_blk disk
>   2) Assigns each to different cgroup and sets blkio.weight 100/1000
>   3) Runs dd with flag=direct (read/write) from the virtio_blk disk
>      repeatedly
>   4) After 1 minute checks the results. If the ratio is better than 1:3,
>      test passes
>
> Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
> ---
>   client/tests/kvm/subtests.cfg.sample |    7 +
>   client/tests/kvm/tests/cgroup.py     |  316 ++++++++++++++++++++++++++++++++++
>   2 files changed, 323 insertions(+), 0 deletions(-)
>   create mode 100644 client/tests/cgroup/__init__.py
>   create mode 100644 client/tests/kvm/tests/cgroup.py
>
> diff --git a/client/tests/cgroup/__init__.py b/client/tests/cgroup/__init__.py
> new file mode 100644
> index 0000000..e69de29
> diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
> index 74e550b..79e0656 100644
> --- a/client/tests/kvm/subtests.cfg.sample
> +++ b/client/tests/kvm/subtests.cfg.sample
> @@ -848,6 +848,13 @@ variants:
>           only Linux
>           type = iofuzz
>
> +    - cgroup:
> +        type = cgroup
> +        # cgroup_tests = "re1[:loops] re2[:loops] ..."
> +        cgroup_tests = ".*:1"
> +        vms += " vm2"
> +        extra_params += " -snapshot"
>    

You run blkio with snapshot? Sometimes we need to cgroup different real 
guests, not snapshots.

> +
>       - virtio_console: install setup image_copy unattended_install.cdrom
>           only Linux
>           vms = ''
> diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
> new file mode 100644
> index 0000000..4d0ec43
> --- /dev/null
> +++ b/client/tests/kvm/tests/cgroup.py
> @@ -0,0 +1,316 @@
> +"""
> +cgroup autotest test (on KVM guest)
> +@author: Lukas Doktor <ldoktor@redhat.com>
> +@copyright: 2011 Red Hat, Inc.
> +"""
> +import logging, re, sys, tempfile, time, traceback
> +from autotest_lib.client.common_lib import error
> +from autotest_lib.client.bin import utils
> +from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
> +
> +def run_cgroup(test, params, env):
> +    """
> +    Tests the cgroup functions on KVM guests.
> +     * Uses variable tests (marked by TODO comment) to map the subtests
> +    """
> +    vms = None
> +    tests = None
> +
> +    # Tests
> +    class _TestBlkioBandwidth:
> +        """
> +        BlkioBandwidth dummy test
> +         * Use it as a base class to an actual test!
> +         * self.dd_cmd and attr '_set_properties' have to be implemented
> +         * It prepares 2 vms and run self.dd_cmd to simultaniously stress the
> +            machines. After 1 minute it kills the dd and gather the throughput
> +            informations.
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            self.vms = vms      # Virt machines
> +            self.modules = modules          # cgroup module handler
> +            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
> +            self.files = []     # Temporary files (files of virt disks)
> +            self.devices = []   # Temporary virt devices (PCI drive 1 per vm)
> +            self.dd_cmd = None  # DD command used to test the throughput
> +
> +        def cleanup(self):
> +            """
> +            Cleanup
> +            """
> +            err = ""
> +            try:
> +                for i in range (2):
> +                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
> +                    self.files[i].close()
> +            except Exception, inst:
> +                err += "\nCan't remove PCI drive: %s" % inst
> +            try:
> +                del(self.blkio)
> +            except Exception, inst:
> +                err += "\nCan't remove Cgroup: %s" % inst
> +
> +            if err:
> +                logging.error("Some parts of cleanup failed:%s", err)
> +                raise error.TestError("Some parts of cleanup failed:%s" % err)
> +
> +        def init(self):
> +            """
> +            Initialization
> +             * assigns vm1 and vm2 into cgroups and sets the properties
> +             * creates a new virtio device and adds it into vms
> +            """
> +            if test.tagged_testname.find('virtio_blk') == -1:
> +                logging.warn("You are executing non-virtio_blk test but this "
> +                             "particular subtest uses manually added "
> +                             "'virtio_blk' device.")
>    

Emm... we can also run the blkio test with IDE.
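A hypothetical way to support both drivers would be to derive the guest device glob from the drive interface under test. The helper and mapping below are assumptions for illustration, not part of the patch:

```python
# hypothetical helper: build the guest dd read loop for a given drive interface
DEV_GLOBS = {"virtio": "/dev/vd*", "ide": "/dev/sd*"}

def dd_read_cmd(drive_if="virtio"):
    # same loop as the patch's dd_cmd, with the device glob made selectable
    glob_pat = DEV_GLOBS[drive_if]
    return ("export FILE=$(ls %s | tail -n 1); touch /tmp/cgroup_lock; "
            "while [ -e /tmp/cgroup_lock ]; "
            "do dd if=$FILE of=/dev/null iflag=direct bs=100K; done" % glob_pat)
```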

> +            if not self.dd_cmd:
> +                raise error.TestError("Corrupt class, aren't you trying to run "
> +                                      "parent _TestBlkioBandwidth() function?")
> +            if len(self.vms) < 2:
> +                raise error.TestError("Test needs at least 2 vms.")
> +
> +            # cgroups
> +            pwd = []
> +            blkio = self.blkio
> +            if blkio.initialize(self.modules):
> +                raise error.TestError("Could not initialize blkio Cgroup")
> +            for i in range(2):
> +                pwd.append(blkio.mk_cgroup())
> +                if pwd[i] == None:
> +                    raise error.TestError("Can't create cgroup")
> +                if blkio.set_cgroup(self.vms[i].get_shell_pid(), pwd[i]):
> +                    raise error.TestError("Could not set cgroup")
> +                # Move all existing threads into cgroup
> +                for tmp in utils.system_output("ps -L --ppid=%d -o lwp"
> +                                % self.vms[i].get_shell_pid()).split('\n')[1:]:
> +                    if blkio.set_cgroup(int(tmp), pwd[i]):
> +                        raise error.TestError("Could not set cgroup")
> +            if self.blkio.set_property("blkio.weight", 100, pwd[0]):
>    

It's better to make the bandwidth configurable, and there may be more 
than 2 groups.
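One possible shape for that is reading one weight per VM from the test params; the `cgroup_weights` key and helper name here are assumptions, not an existing parameter of the patch:

```python
# hypothetical: one blkio.weight per VM from the config, default 100/1000
def get_blkio_weights(params, n_vms):
    weights = [int(w) for w in params.get("cgroup_weights", "100 1000").split()]
    if len(weights) != n_vms:
        raise ValueError("need %d weights, got %d" % (n_vms, len(weights)))
    return weights
```

With that, `init()` could loop over `n_vms` cgroups instead of a hardcoded pair.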

> +                raise error.TestError("Could not set blkio.weight")
> +            if self.blkio.set_property("blkio.weight", 1000, pwd[1]):
> +                raise error.TestError("Could not set blkio.weight")
> +
> +            # Add dumm drives
> +            for i in range(2):
>    
> +                self.files.append(tempfile.NamedTemporaryFile(
> +                                        prefix="cgroup-disk-",
> +                                        suffix=".iso"))
> +                utils.system("dd if=/dev/zero of=%s bs=1M count=10 &>/dev/null"
> +                             % (self.files[i].name))
> +                out = vms[i].monitor.cmd("pci_add auto storage file=%s,"
> +                                "if=virtio,snapshot=off,cache=off"
> +                                % (self.files[i].name))
> +                out = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), '
> +                                 'function \d+', out).groups()
> +                self.devices.append("%s:%s:%s" % out)
> +
> +
> +        def run(self):
> +            """
> +            Actual test:
> +             * executes self.dd_cmd simultanously on both vms.
> +            """
> +            sessions = []
> +            out = []
> +            sessions.append(vms[0].wait_for_login(timeout=30))
> +            sessions.append(vms[1].wait_for_login(timeout=30))
> +            sessions.append(vms[0].wait_for_login(timeout=30))
> +            sessions.append(vms[1].wait_for_login(timeout=30))
> +            sessions[0].sendline(self.dd_cmd)
> +            sessions[1].sendline(self.dd_cmd)
> +            time.sleep(60)
> +
> +            cmd = "rm -f /tmp/cgroup_lock; killall -9 dd"
> +            sessions[2].sendline(cmd)
> +            sessions[3].sendline(cmd)
> +            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
> +                      '\d+\.*\d* \w./s')
> +            out = []
> +            for i in range(2):
> +                out.append(sessions[i].read_up_to_prompt())
> +                out[i] = [int(_[0])/float(_[1])
> +                            for _ in re.findall(re_dd, out[i])[1:-1]]
> +                logging.debug("dd(%d) output: %s", i, out[i])
> +                out[i] = [min(out[i]), sum(out[i])/len(out[i]), max(out[i]),
> +                          len(out[i])]
> +
> +            for session in sessions:
> +                session.close()
> +
> +            logging.debug("dd values (min,avg,max,ddloops):\nout1: %s\nout2: %s"
> +                          ,out[0], out[1])
> +
> +            out1 = out[0][1]
> +            out2 = out[1][1]
> +            # In theory out1 should be 10times smaller, than out2.
> +            if out1*3 > out2:
> +                raise error.TestFail("dd values: %s:%s (1:%f), limit 1:2.5"
> +                                     ", theoretical: 1:10"
> +                                     % (out1, out2, out2/out1))
> +            else:
> +                logging.info("dd values: %s:%s (1:%s)", out1, out2, out2/out1)
> +
> +
> +
> +    class TestBlkioBandwidthWeigthRead(_TestBlkioBandwidth):
> +        """
> +        Tests the blkio.weight capability using simultanious read on 2 vms
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            _TestBlkioBandwidth.__init__(self, vms, modules)
> +            self.dd_cmd = ("export FILE=$(ls /dev/vd* | tail -n 1); touch "
> +                           "/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];"
> +                           "do dd if=$FILE of=/dev/null iflag=direct bs=100K;"
> +                           "done")
> +
> +
> +    class TestBlkioBandwidthWeigthWrite(_TestBlkioBandwidth):
> +        """
> +        Tests the blkio.weight capability using simultanious write on 2 vms
> +        """
> +        def __init__(self, vms, modules):
> +            """
> +            Initialization
> +            @param vms: list of vms
> +            @param modules: initialized cgroup module class
> +            """
> +            _TestBlkioBandwidth.__init__(self, vms, modules)
> +            self.dd_cmd = ('export FILE=$(ls /dev/vd* | tail -n 1); touch '
> +                           '/tmp/cgroup_lock ; while [ -e /tmp/cgroup_lock ];'
> +                           'do dd if=/dev/zero of=$FILE oflag=direct bs=100K;'
> +                           'done')
> +
> +
> +    def _check_vms(vms):
> +        """
> +        Checks the vitality of the VMs
> +        @param vms: list of vms
> +        """
> +        for i in range(len(vms)):
> +            vms[i].verify_alive()
> +            _ = vms[i].wait_for_login(timeout=60)
> +            out = _.cmd_output("dmesg -c")
> +            _.close()
> +            del(_)
> +            if out.find("BUG") != -1:
> +                logging.error("BUG occurred in dmesg:\n%s", out)
> +                logging.warn("recreate VM(%s)", i)
> +                # The VM has to be recreated to reset the qemu PCI state
> +                vms[i].create()
> +
> +
> +    # Setup
> +    # TODO: Add all new tests here
> +    tests = {"blkio_bandwidth_weigth_read"  : TestBlkioBandwidthWeigthRead,
> +             "blkio_bandwidth_weigth_write" : TestBlkioBandwidthWeigthWrite,
> +            }
> +    modules = CgroupModules()
> +    if (modules.init(['cpuset', 'cpu', 'cpuacct', 'memory', 'devices',
> +                      'freezer', 'net_cls', 'blkio']) <= 0):
> +        raise error.TestFail('Can\'t mount any cgroup modules')
> +    # Add all vms
> +    vms = []
> +    for vm in params.get("vms", "main_vm").split():
> +        vm = env.get_vm(vm)
> +        vm.verify_alive()
> +        timeout = int(params.get("login_timeout", 360))
> +        _ = vm.wait_for_login(timeout=timeout)
> +        _.close()
> +        del(_)
> +        vms.append(vm)
> +
> +
> +    # Execute tests
> +    results = ""
> +    # cgroup_tests = "re1[:loops] re2[:loops] ... ... ..."
> +    for j in params.get("cgroup_tests").split():
> +        try:
> +            loops = int(j[j.rfind(':')+1:])
> +            j = j[:j.rfind(':')]
> +        except ValueError:
> +            loops = 1
> +        for _loop in range(loops):
> +            for i in [_ for _ in tests.keys() if re.match(j, _)]:
> +                logging.info("%s: Entering the test", i)
> +                try:
> +                    _check_vms(vms)
> +                    tst = tests[i](vms, modules)
> +                    tst.init()
> +                    tst.run()
> +                except error.TestFail, inst:
> +                    logging.error("%s: Leaving, test FAILED (TestFail): %s",
> +                                  i, inst)
> +                    results += "\n * %s: Test FAILED (TestFail): %s" % (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        tmps = ""
> +                        for tmp in traceback.format_exception(
> +                                                sys.exc_info()[0],
> +                                                sys.exc_info()[1],
> +                                                sys.exc_info()[2]):
> +                            tmps += "%s cleanup: %s" % (i, tmp)
> +                        logging.info("%s: cleanup also failed\n%s", i, tmps)
> +                except error.TestError, inst:
> +                    tmps = ""
> +                    for tmp in traceback.format_exception(
> +                                            sys.exc_info()[0],
> +                                            sys.exc_info()[1],
> +                                            sys.exc_info()[2]):
> +                        tmps += "%s: %s" % (i, tmp)
> +                    logging.error("%s: Leaving, test FAILED (TestError): %s",
> +                                  i, tmps)
> +                    results += "\n * %s: Test FAILED (TestError): %s"% (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
> +                except Exception, inst:
> +                    tmps = ""
> +                    for tmp in traceback.format_exception(
> +                                            sys.exc_info()[0],
> +                                            sys.exc_info()[1],
> +                                            sys.exc_info()[2]):
> +                        tmps += "%s: %s" % (i, tmp)
> +                    logging.error("%s: Leaving, test FAILED (Exception): %s",
> +                                  i, tmps)
> +                    results += "\n * %s: Test FAILED (Exception): %s"% (i, inst)
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        logging.warn("%s: cleanup also failed: %s\n", i, inst)
> +                else:
> +                    try:
> +                        tst.cleanup()
> +                    except Exception, inst:
> +                        tmps = ""
> +                        for tmp in traceback.format_exception(
> +                                                sys.exc_info()[0],
> +                                                sys.exc_info()[1],
> +                                                sys.exc_info()[2]):
> +                            tmps += "%s cleanup: %s" % (i, tmp)
> +                        logging.info("%s: Leaving, test passed but cleanup "
> +                                     "FAILED\n%s", i, tmps)
> +                        results += ("\n * %s: Test passed but cleanup FAILED"
> +                                    % (i))
> +                    else:
> +                        logging.info("%s: Leaving, test PASSED", i)
> +                        results += "\n * %s: Test PASSED" % (i)
> +
> +    logging.info("SUM: All tests finished (%d PASS / %d FAIL = %d TOTAL)%s",
> +                 results.count("PASSED"), results.count("FAILED"),
> +                 (results.count("PASSED")+results.count("FAILED")), results)
> +    if results.count("FAILED"):
> +        raise error.TestFail("Some subtests failed")
> +
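As an aside, the pass criterion used above (fail when out1*3 > out2) can be sketched as a standalone check; `blkio_weight_ratio_ok` is a hypothetical helper for illustration, not part of the patch:

```python
def blkio_weight_ratio_ok(avg_low, avg_high, limit=3.0):
    """Return True when the average dd throughput of the weight=1000
    cgroup exceeds ``limit`` times the weight=100 cgroup's throughput.
    Theoretically the ratio is 1:10; the test only demands 1:3."""
    # Mirrors the patch's condition: fail when avg_low * 3 > avg_high.
    return not (avg_low * limit > avg_high)

# e.g. 10 MB/s vs 40 MB/s satisfies the 1:3 limit
print(blkio_weight_ratio_ok(10.0, 40.0))
```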
>    


* Re: [PATCH 2/2] adds cgroup tests on KVM guests with first test
  2011-11-03  6:04   ` Suqin
@ 2011-11-03  7:32     ` Lukáš Doktor
  0 siblings, 0 replies; 11+ messages in thread
From: Lukáš Doktor @ 2011-11-03  7:32 UTC (permalink / raw)
  To: Suqin; +Cc: autotest, kvm, kvm-autotest, akong, lmr, jzupka

Dne 3.11.2011 07:04, Suqin napsal(a):
> On 09/23/2011 12:29 AM, Lukas Doktor wrote:
>> basic structure:
>>   * similar to general client/tests/cgroup/ test (imports from the
>>     cgroup_common.py)
>>   * uses classes for better handling
>>   * improved logging and error handling
>>   * checks/repairs the guests after each subtest
>>   * subtest mapping is specified in the test dictionary in cgroup.py
>>   * allows specifying tests/repetitions in tests_base.cfg
>>              (cgroup_tests = "re1[:loops] re2[:loops] ...")
>>
>> TestBlkioBandwidthWeight{Read,Write}:
>>   * Two similar tests for blkio.weight functionality inside the guest
>>     using direct io and the virtio_blk driver
>>   * Function:
>>   1) On 2 VMs adds a small (10MB) virtio_blk disk
>>   2) Assigns each to a different cgroup and sets blkio.weight 100/1000
>>   3) Runs dd with flag=direct (read/write) from the virtio_blk disk
>>      repeatedly
>>   4) After 1 minute checks the results. If the ratio is better than 1:3,
>>      the test passes
>>
>> Signed-off-by: Lukas Doktor<ldoktor@redhat.com>
>> ---
>>   client/tests/kvm/subtests.cfg.sample |    7 +
>>   client/tests/kvm/tests/cgroup.py     |  316 ++++++++++++++++++++++++++++++++++
>>   2 files changed, 323 insertions(+), 0 deletions(-)
>>   create mode 100644 client/tests/cgroup/__init__.py
>>   create mode 100644 client/tests/kvm/tests/cgroup.py
>>
>> diff --git a/client/tests/cgroup/__init__.py b/client/tests/cgroup/__init__.py
>> new file mode 100644
>> index 0000000..e69de29
>> diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
>> index 74e550b..79e0656 100644
>> --- a/client/tests/kvm/subtests.cfg.sample
>> +++ b/client/tests/kvm/subtests.cfg.sample
>> @@ -848,6 +848,13 @@ variants:
>>           only Linux
>>           type = iofuzz
>>
>> +    - cgroup:
>> +        type = cgroup
>> +        # cgroup_tests = "re1[:loops] re2[:loops] ..."
>> +        cgroup_tests = ".*:1"
>> +        vms += " vm2"
>> +        extra_params += " -snapshot"
>
> you run blkio with snapshot? Sometimes we need to cgroup different real
> guests, not snapshots.
The actual tested disks are added inside the test with the additional
parameter snapshot=off. I'm using snapshot on the main disk only, because
the VM dies quite often (usually during the cleanup part).
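For illustration, only the hot-added test drive disables snapshot; a sketch of the monitor command the test builds (the helper name and path are hypothetical):

```python
def build_pci_add_cmd(image_path):
    # The hot-added test drive explicitly sets snapshot=off so blkio
    # weights act on real I/O, and cache=off so the host page cache
    # doesn't skew throughput; only the VM's main disk runs -snapshot.
    return ("pci_add auto storage file=%s,"
            "if=virtio,snapshot=off,cache=off" % image_path)

print(build_pci_add_cmd("/tmp/cgroup-disk-XXXX.iso"))
```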

>
>> +
>>       - virtio_console: install setup image_copy unattended_install.cdrom
>>           only Linux
>>           vms = ''
>> diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
>> new file mode 100644
>> index 0000000..4d0ec43
>> --- /dev/null
>> +++ b/client/tests/kvm/tests/cgroup.py
>> @@ -0,0 +1,316 @@
>> +"""
>> +cgroup autotest test (on KVM guest)
>> +@author: Lukas Doktor<ldoktor@redhat.com>
>> +@copyright: 2011 Red Hat, Inc.
>> +"""
>> +import logging, re, sys, tempfile, time, traceback
>> +from autotest_lib.client.common_lib import error
>> +from autotest_lib.client.bin import utils
>> +from autotest_lib.client.tests.cgroup.cgroup_common import Cgroup, CgroupModules
>> +
>> +def run_cgroup(test, params, env):
>> +    """
>> +    Tests the cgroup functions on KVM guests.
>> +     * Uses variable tests (marked by TODO comment) to map the subtests
>> +    """
>> +    vms = None
>> +    tests = None
>> +
>> +    # Tests
>> +    class _TestBlkioBandwidth:
>> +        """
>> +        BlkioBandwidth dummy test
>> +         * Use it as a base class for an actual test!
>> +         * self.dd_cmd and attr '_set_properties' have to be implemented
>> +         * It prepares 2 vms and runs self.dd_cmd to simultaneously stress
>> +            the machines. After 1 minute it kills the dd and gathers the
>> +            throughput information.
>> +        """
>> +        def __init__(self, vms, modules):
>> +            """
>> +            Initialization
>> +            @param vms: list of vms
>> +            @param modules: initialized cgroup module class
>> +            """
>> +            self.vms = vms      # Virt machines
>> +            self.modules = modules          # cgroup module handler
>> +            self.blkio = Cgroup('blkio', '')    # cgroup blkio handler
>> +            self.files = []     # Temporary files (files of virt disks)
>> +            self.devices = []   # Temporary virt devices (PCI drive, 1 per vm)
>> +            self.dd_cmd = None  # DD command used to test the throughput
>> +
>> +        def cleanup(self):
>> +            """
>> +            Cleanup
>> +            """
>> +            err = ""
>> +            try:
>> +                for i in range (2):
>> +                    vms[i].monitor.cmd("pci_del %s" % self.devices[i])
>> +                    self.files[i].close()
>> +            except Exception, inst:
>> +                err += "\nCan't remove PCI drive: %s" % inst
>> +            try:
>> +                del(self.blkio)
>> +            except Exception, inst:
>> +                err += "\nCan't remove Cgroup: %s" % inst
>> +
>> +            if err:
>> +                logging.error("Some parts of cleanup failed:%s", err)
>> +                raise error.TestError("Some parts of cleanup 
>> failed:%s" % err)
>> +
>> +        def init(self):
>> +            """
>> +            Initialization
>> +             * assigns vm1 and vm2 into cgroups and sets the properties
>> +             * creates a new virtio device and adds it into vms
>> +            """
>> +            if test.tagged_testname.find('virtio_blk') == -1:
>> +                logging.warn("You are executing non-virtio_blk test but this "
>> +                             "particular subtest uses manually added "
>> +                             "'virtio_blk' device.")
>
> emm... we can also run blkio test with ide.
Anything except virtio_blk seems to ignore the flag=direct flag in dd. I
tested this both manually and using this test. Anyway, if you are sure it
should be supported, I can use the get_device_driver() function and use
the tested VM's driver.
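If other drivers were to be supported, the dd command's device glob could be derived from the detected driver; a rough sketch (the mapping and the idea of querying via something like get_device_driver() are assumptions, not tested code):

```python
def dd_device_glob(driver):
    # Hypothetical mapping from disk driver name to guest device glob;
    # only virtio ('/dev/vd*') was verified to honour dd's direct flags.
    globs = {"virtio": "/dev/vd*", "ide": "/dev/hd*", "scsi": "/dev/sd*"}
    return globs.get(driver, "/dev/vd*")

# Build a driver-aware variant of the test's dd command
dd_cmd = ("export FILE=$(ls %s | tail -n 1); "
          "dd if=$FILE of=/dev/null iflag=direct bs=100K"
          % dd_device_glob("ide"))
print(dd_cmd)
```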

>
>> +            if not self.dd_cmd:
>> +                raise error.TestError("Corrupt class, aren't you trying to run "
>> +                                      "parent _TestBlkioBandwidth() function?")
>> +            if len(self.vms) < 2:
>> +                raise error.TestError("Test needs at least 2 vms.")
>> +
>> +            # cgroups
>> +            pwd = []
>> +            blkio = self.blkio
>> +            if blkio.initialize(self.modules):
>> +                raise error.TestError("Could not initialize blkio Cgroup")
>> +            for i in range(2):
>> +                pwd.append(blkio.mk_cgroup())
>> +                if pwd[i] == None:
>> +                    raise error.TestError("Can't create cgroup")
>> +                if blkio.set_cgroup(self.vms[i].get_shell_pid(), pwd[i]):
>> +                    raise error.TestError("Could not set cgroup")
>> +                # Move all existing threads into cgroup
>> +                for tmp in utils.system_output("ps -L --ppid=%d -o lwp"
>> +                                % self.vms[i].get_shell_pid()).split('\n')[1:]:
>> +                    if blkio.set_cgroup(int(tmp), pwd[i]):
>> +                        raise error.TestError("Could not set cgroup")
>> +            if self.blkio.set_property("blkio.weight", 100, pwd[0]):
>
> it's better to set bandwidth configurable. and there maybe more than 2 
> groups.
Yes, I can imagine a parameter which defines the cgroup values.
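Such a configurable variant could read the weights from a new config entry; a sketch assuming a hypothetical `cgroup_weights` parameter (not part of the patch):

```python
def parse_weights(param, default=(100, 1000)):
    # Hypothetical 'cgroup_weights = "100 1000 500"' parameter: one
    # blkio.weight per cgroup, defaulting to the current 100/1000 pair.
    if not param:
        return list(default)
    weights = [int(w) for w in param.split()]
    for w in weights:
        if not 10 <= w <= 1000:  # valid blkio.weight range
            raise ValueError("blkio.weight out of range: %d" % w)
    return weights

print(parse_weights("100 1000 500"))
```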

>
>> +                raise error.TestError("Could not set blkio.weight")
>> +            if self.blkio.set_property("blkio.weight", 1000, pwd[1]):
>> +                raise error.TestError("Could not set blkio.weight")
>> +
>> +            # Add dummy drives
>> +            for i in range(2):
>> +                self.files.append(tempfile.NamedTemporaryFile(
>> +                                        prefix="cgroup-disk-",
>> +                                        suffix=".iso"))
>> +                utils.system("dd if=/dev/zero of=%s bs=1M count=10 &>/dev/null"
>> +                             % (self.files[i].name))
>> +                out = vms[i].monitor.cmd("pci_add auto storage file=%s,"
>> +                                "if=virtio,snapshot=off,cache=off"
>> +                                % (self.files[i].name))
>> +                out = re.search(r'OK domain (\d+), bus (\d+), slot (\d+), '
>> +                                 'function \d+', out).groups()
>> +                self.devices.append("%s:%s:%s" % out)
>> +
>> +
>> +        def run(self):
>> +            """
>> +            Actual test:
>> +             * executes self.dd_cmd simultaneously on both vms.
>> +            """
>> +            sessions = []
>> +            out = []
>> +            sessions.append(vms[0].wait_for_login(timeout=30))
>> +            sessions.append(vms[1].wait_for_login(timeout=30))
>> +            sessions.append(vms[0].wait_for_login(timeout=30))
>> +            sessions.append(vms[1].wait_for_login(timeout=30))
>> +            sessions[0].sendline(self.dd_cmd)
>> +            sessions[1].sendline(self.dd_cmd)
>> +            time.sleep(60)
>> +
>> +            cmd = "rm -f /tmp/cgroup_lock; killall -9 dd"
>> +            sessions[2].sendline(cmd)
>> +            sessions[3].sendline(cmd)
>> +            re_dd = (r'(\d+) bytes \(\d+\.*\d* \w*\) copied, (\d+\.*\d*) s, '
>> +                      '\d+\.*\d* \w./s')
>> +            out = []
>> +            for i in range(2):
>> +                out.append(sessions[i].read_up_to_prompt())
>> +                out[i] = [int(_[0])/float(_[1])
>> +                            for _ in re.findall(re_dd, out[i])[1:-1]]
>> +                logging.debug("dd(%d) output: %s", i, out[i])
>> +                out[i] = [min(out[i]), sum(out[i])/len(out[i]), max(out[i]),
>> +                          len(out[i])]
>> +
>> +            for session in sessions:
>> [...]
Thanks for the review, this test is already upstream (with some changes).
I'm going to finish a couple more subtests (cpu, cpuset, memory) and then
backport it into our virtlab. It should be straightforward as it's a
single-file test with a few additional utils.py functions.

Regards,
Lukáš


end of thread, other threads:[~2011-11-03  7:32 UTC | newest]

Thread overview: 11+ messages
2011-09-22 16:29 [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lukas Doktor
2011-09-22 16:29 ` [PATCH 1/2] cgroup: cgroup_common.py bugfixies and modifications Lukas Doktor
2011-09-23  8:54   ` Jiri Zupka
2011-09-22 16:29 ` [PATCH 2/2] adds cgroup tests on KVM guests with first test Lukas Doktor
2011-09-23  8:54   ` Jiri Zupka
2011-11-03  6:04   ` Suqin
2011-11-03  7:32     ` Lukáš Doktor
2011-09-23 13:36 ` [KVM-autotest][PATCH] cgroup test with KVM guest +first subtests Lucas Meneghel Rodrigues
2011-09-23 15:02   ` Lukáš Doktor
2011-09-23 17:41     ` Lucas Meneghel Rodrigues
2011-09-27 18:23 ` Lucas Meneghel Rodrigues
