* [KVM-autotest] tests.cgroup: Add cpu_cfs test and bugfixes
@ 2011-12-23 14:37 Lukas Doktor
  2011-12-23 14:37 ` [PATCH 1/2] [KVM-autotest] tests.cgroup: Add cpu_cfs_util test Lukas Doktor
  2011-12-23 14:37 ` [PATCH 2/2] [KVM-autotest] tests.cgroup: Variable limits in TestMemory* and bugfixes Lukas Doktor
  0 siblings, 2 replies; 3+ messages in thread
From: Lukas Doktor @ 2011-12-23 14:37 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, lmr, ldoktor, jzupka

Hi guys,

Please find the cpu_cfs_util test, which tests the utilisation of the CFS scheduler when cpu.cfs_quota_us is set. It starts enough VMs to provide twice as many vCPUs as the host has CPUs and sets a 50% quota for each vCPU. In theory the overall utilisation should be exactly 100%. We measure the guest time and the overall host time (both from /proc/stat) and require their ratio to be over 95%. (Turn off your Firefox, movie players and Thunderbirds if you are running this directly on your box ;-) )
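
For reference, a minimal sketch of the verification math, assuming the
usual /proc/stat cpu-line layout (user, nice, system, idle, iowait, irq,
softirq, steal, then guest and guest_nice); the helper names and the
sampling interval below are illustrative, not taken from the test itself:

import time

def read_cpu_fields():
    # First line of /proc/stat: "cpu  user nice system idle ..."
    with open('/proc/stat') as stat:
        return [int(field) for field in stat.readline().split()[1:]]

def guest_share(interval=10):
    before = read_cpu_fields()
    time.sleep(interval)
    after = read_cpu_fields()
    host = sum(after[0:8]) - sum(before[0:8])   # user..steal delta
    guest = sum(after[8:]) - sum(before[8:])    # guest, guest_nice delta
    return float(guest) / host                  # the test requires > 0.95

print("guest share: %.3f" % guest_share())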

The second patch contains minor changes and bugfixes.

See the pull request:
https://github.com/autotest/autotest/pull/120

Best regards and merry Christmas,
Lukáš Doktor



* [PATCH 1/2] [KVM-autotest] tests.cgroup: Add cpu_cfs_util test
  2011-12-23 14:37 [KVM-autotest] tests.cgroup: Add cpu_cfs test and bugfixes Lukas Doktor
@ 2011-12-23 14:37 ` Lukas Doktor
  2011-12-23 14:37 ` [PATCH 2/2] [KVM-autotest] tests.cgroup: Variable limits in TestMemory* and bugfixes Lukas Doktor
  1 sibling, 0 replies; 3+ messages in thread
From: Lukas Doktor @ 2011-12-23 14:37 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, lmr, ldoktor, jzupka

This test exercises the cpu.cfs_* settings. It focuses on a known issue
where CFS might schedule multiple threads placed in related cpu cgroups,
with cpu.cfs_quota_us defined, very inefficiently.

* Add TestCpuCFS
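
To make the expected numbers explicit, a minimal sketch of the quota
arithmetic the test relies on (the host_cpus and smp values are
illustrative):

# The test builds this hierarchy (paths illustrative):
#   cpu/vm1        cfs_period_us=100000, cfs_quota_us=50000 * smp
#   cpu/vm1/vcpu0  cfs_period_us=100000, cfs_quota_us=50000
# Each vCPU cgroup is capped at 50% of one CPU and vCPUs are added
# until vm_cpus == 2 * host_cpus, so the aggregate quota equals
# host_cpus * 100% and the host should be fully utilised.
host_cpus = 4                       # illustrative
smp = 2                             # vCPUs per VM
vms = (2 * host_cpus) // smp        # VMs needed for 2 * host_cpus vCPUs
aggregate_quota = vms * smp * 0.5   # in units of whole CPUs
assert aggregate_quota == host_cpus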

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/kvm/subtests.cfg.sample |    2 +
 client/tests/kvm/tests/cgroup.py     |  168 ++++++++++++++++++++++++++++++++++
 2 files changed, 170 insertions(+), 0 deletions(-)

diff --git a/client/tests/kvm/subtests.cfg.sample b/client/tests/kvm/subtests.cfg.sample
index 3d47fb4..ca75fe4 100644
--- a/client/tests/kvm/subtests.cfg.sample
+++ b/client/tests/kvm/subtests.cfg.sample
@@ -1048,6 +1048,8 @@ variants:
         type = cgroup
         # cgroup_tests = "re1[:loops] re2[:loops] ..."
         cgroup_tests = ".*:1"
+        #cgroup_memory_limit_kb = 2097152   # default 1/2 guest ram
+        #cgroup_memory_move_mb = 2048   # default 1/2 guest ram
         vms += " vm2"
         extra_params += " -snapshot"
 
diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
index 95ca1fd..148e4e7 100644
--- a/client/tests/kvm/tests/cgroup.py
+++ b/client/tests/kvm/tests/cgroup.py
@@ -1315,6 +1315,173 @@ def run_cgroup(test, params, env):
             self.speeds = [100000, 100000]
 
 
+    class TestCpuCFSUtil:
+        """
+        Tests the scheduler utilisation when the cgroup cpu.cfs_* settings
+        are set. There is a known issue with the scheduler and multiple CPUs.
+        """
+        def __init__(self, vms, modules):
+            """
+            Initialization
+            @param vms: list of vms
+            @param modules: initialized cgroup module class
+            """
+            self.vms = vms[:]               # Copy of virt machines
+            self.vms_count = len(vms)       # Original number of vms
+            self.modules = modules          # cgroup module handler
+            self.cgroup = Cgroup('cpu', '') # cgroup cpu handler
+            self.sessions = []              # ssh sessions
+            self.serials = []               # serial consoles
+
+
+        def cleanup(self):
+            """ Cleanup """
+            err = ""
+            del(self.cgroup)
+
+            for i in range(len(self.vms)):
+                self.serials[i].sendline('rm -f /tmp/cgroup-cpu-lock')
+            del self.serials
+
+            for i in range(len(self.sessions)):
+                try:
+                    self.sessions[i].close()
+                except Exception, failure_detail:
+                    err += ("\nCan't close ssh connection %s" % i)
+            del self.sessions
+
+            for vm in self.vms[self.vms_count:]:
+                try:
+                    vm.destroy(gracefully=False)
+                except Exception, failure_detail:
+                    err += "\nCan't destroy added VM: %s" % failure_detail
+            del self.vms
+
+            if err:
+                logging.error("Some cleanup operations failed: %s", err)
+                raise error.TestError("Some cleanup operations failed: %s"
+                                      % err)
+
+
+        def init(self):
+            """
+            Initialization
+             * creates additional VMs (vm_cpus = 2 * host_cpus)
+             * creates a cgroup for each VM and subcgroups for its vCPUs
+               (../vm[123..]/vcpu[012..])
+            """
+            def get_cpu_pids(vm, smp=None):
+                """ Get pids of all VM's vcpus """
+                cpu_pids = re.findall(r'thread_id=(\d+)',
+                                      vm.monitor.info("cpus"))
+                if not cpu_pids:
+                    raise error.TestFail("Can't get 'info cpus' from monitor")
+                if smp is not None and len(cpu_pids) != smp:
+                    raise error.TestFail("Wrong number of vcpus: monitor = %s"
+                                         ", params = %s" % (len(cpu_pids), smp))
+                return cpu_pids
+
+            self.cgroup.initialize(self.modules)
+            host_cpus = open('/proc/cpuinfo').read().count('model name')
+            smp = int(params.get('smp', 1))
+            vm_cpus = 0
+            # Prepare the existing vms (if necessary)
+            for i in range(min(len(self.vms), 2 * host_cpus / smp)):
+                # Create /vm[123] cgroups and set cfs_quota_us to no_vcpus * 50%
+                vm_pwd = self.cgroup.mk_cgroup()
+                self.cgroup.set_property("cpu.cfs_period_us", 100000, vm_pwd)
+                self.cgroup.set_property("cpu.cfs_quota_us", 50000*smp, vm_pwd)
+                assign_vm_into_cgroup(self.vms[i], self.cgroup, vm_pwd)
+                cpu_pids = get_cpu_pids(self.vms[i], smp)
+                for j in range(smp):
+                    # Create /vm*/vcpu[123] cgroups and set cfs_quota_us to 50%
+                    vcpu_pwd = self.cgroup.mk_cgroup(vm_pwd)
+                    self.cgroup.set_property("cpu.cfs_period_us", 100000,
+                                                                    vcpu_pwd)
+                    self.cgroup.set_property("cpu.cfs_quota_us", 50000,
+                                                                    vcpu_pwd)
+                    self.cgroup.set_cgroup(int(cpu_pids[j]), vcpu_pwd)
+                    self.sessions.append(self.vms[i].wait_for_login(timeout=30))
+                    vm_cpus += 1
+                self.serials.append(self.vms[i].wait_for_serial_login(
+                                                                    timeout=30))
+                self.serials[-1].cmd("touch /tmp/cgroup-cpu-lock")
+            timeout = 1.5 * int(params.get("login_timeout", 360))
+            _params = params
+            # Add additional vms (if necessary)
+            i = 0
+            while vm_cpus < 2 * host_cpus:
+                vm_name = "clone%s" % i
+                smp = min(vm_cpus, 2 * host_cpus - vm_cpus)
+                _params['smp'] = smp
+                self.vms.append(self.vms[0].clone(vm_name, _params))
+                env.register_vm(vm_name, self.vms[-1])
+                self.vms[-1].create()
+                vm_pwd = self.cgroup.mk_cgroup()
+                self.cgroup.set_property("cpu.cfs_period_us", 100000, vm_pwd)
+                self.cgroup.set_property("cpu.cfs_quota_us", 50000*smp, vm_pwd)
+                assign_vm_into_cgroup(self.vms[-1], self.cgroup, vm_pwd)
+                cpu_pids = get_cpu_pids(self.vms[-1], smp)
+                for j in range(smp):
+                    vcpu_pwd = self.cgroup.mk_cgroup(vm_pwd)
+                    self.cgroup.set_property("cpu.cfs_period_us", 100000,
+                                                                    vcpu_pwd)
+                    self.cgroup.set_property("cpu.cfs_quota_us", 50000,
+                                                                    vcpu_pwd)
+                    self.cgroup.set_cgroup(int(cpu_pids[j]), vcpu_pwd)
+                    self.sessions.append(self.vms[-1].wait_for_login(
+                                                            timeout=timeout))
+                self.serials.append(self.vms[-1].wait_for_serial_login(
+                                                                    timeout=30))
+                self.serials[-1].cmd("touch /tmp/cgroup-cpu-lock")
+                vm_cpus += smp
+                i += 1
+
+
+        def run(self):
+            """
+            Actual test:
+            Runs stressers on all vcpus, gathers the host CPU utilisation
+            and verifies that guests use at least 95% of the CPU time.
+            """
+            stats = []
+            cmd = "renice -n 10 $$; "
+            cmd += "while [ -e /tmp/cgroup-cpu-lock ]; do :; done"
+            for session in self.sessions:
+                session.sendline(cmd)
+
+            # Test
+            time.sleep(1)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(1)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(9)
+            stats.append(open('/proc/stat', 'r').readline())
+            time.sleep(49)
+            stats.append(open('/proc/stat', 'r').readline())
+            for session in self.serials:
+                session.sendline('rm -f /tmp/cgroup-cpu-lock')
+
+            # Verification
+            print stats
+            stats[0] = [int(_) for _ in stats[0].split()[1:]]
+            stats[0] = [sum(stats[0][0:8]), sum(stats[0][8:])]
+            for i in range(1, len(stats)):
+                stats[i] = [int(_) for _ in stats[i].split()[1:]]
+                try:
+                    stats[i] = (float(sum(stats[i][8:]) - stats[0][1]) /
+                                        (sum(stats[i][0:8]) - stats[0][0]))
+                except ZeroDivisionError:
+                    logging.error("ZeroDivisionError in stats calculation")
+                    stats[i] = False
+            print stats
+            for i in range(1, len(stats)):
+                if stats[i] < 0.95:
+                    raise error.TestFail("Guest time is not >95%%: %s" % stats)
+
+            logging.info("Guest times are over 95%%: %s", stats)
+            return "Guest times are over 95%%: %s" % stats
+
     class TestCpusetCpus:
         """
         Tests the cpuset.cpus cgroup feature. It stresses all VM's CPUs
@@ -1606,6 +1773,7 @@ def run_cgroup(test, params, env):
              "memory_limit"                 : TestMemoryLimit,
              "cpu_share_10"                 : TestCpuShare10,
              "cpu_share_50"                 : TestCpuShare50,
+             "cpu_cfs_util"                 : TestCpuCFSUtil,
              "cpuset_cpus"                  : TestCpusetCpus,
              "cpuset_cpus_switching"        : TestCpusetCpusSwitching,
             }
-- 
1.7.7.4


* [PATCH 2/2] [KVM-autotest] tests.cgroup: Variable limits in TestMemory* and bugfixes
  2011-12-23 14:37 [KVM-autotest] tests.cgroup: Add cpu_cfs test and bugfixes Lukas Doktor
  2011-12-23 14:37 ` [PATCH 1/2] [KVM-autotest] tests.cgroup: Add cpu_cfs_util test Lukas Doktor
@ 2011-12-23 14:37 ` Lukas Doktor
  1 sibling, 0 replies; 3+ messages in thread
From: Lukas Doktor @ 2011-12-23 14:37 UTC (permalink / raw)
  To: autotest, kvm, kvm-autotest, lmr, ldoktor, jzupka

* Add cgroup_memory_limit_kb and cgroup_memory_move_mb variables to the
  cfg to allow the user to specify the amount of tested/stressed memory
* Use iflag=fullblock to force dd to read full blocks
* Fix the cpu enumeration in the cpuset subtest (see the sketch below)
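
For illustration, a minimal sketch of what the old and the fixed cpuset
enumeration produce (vm_cpus and no_cpus mirror the variables in the
subtest; the concrete values are made up):

vm_cpus, no_cpus = 4, 8

# Before: "%d" % i*2 parses as ("%d" % i) * 2, duplicating each entry,
# and the range bounds were wrong as well.
old = "0" + "".join(",%d" % i * 2 for i in range(2, min(vm_cpus * 2, no_cpus)))

# After: (i * 2) selects every second CPU, giving the O_O_ pattern.
new = "0" + "".join(",%d" % (i * 2) for i in range(1, min(vm_cpus, no_cpus // 2)))

print(old)  # 0,2,2,3,3,4,4,5,5,6,6,7,7
print(new)  # 0,2,4,6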

Signed-off-by: Lukas Doktor <ldoktor@redhat.com>
---
 client/tests/kvm/tests/cgroup.py |   36 ++++++++++++++++++++++++------------
 1 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/client/tests/kvm/tests/cgroup.py b/client/tests/kvm/tests/cgroup.py
index 148e4e7..c01b3c1 100644
--- a/client/tests/kvm/tests/cgroup.py
+++ b/client/tests/kvm/tests/cgroup.py
@@ -945,8 +945,13 @@ def run_cgroup(test, params, env):
             sessions.append(self.vm.wait_for_login(timeout=30))
             sessions.append(self.vm.wait_for_login(timeout=30))
 
-            size = int(params.get('mem', 512)) / 2   # Use half of the memory
-            sessions[0].sendline('dd if=/dev/zero of=/dev/null bs=%sM' % size)
+            # Don't allow specifying more than 1/2 of the VM's memory
+            size = int(params.get('mem', 1024)) / 2
+            if params.get('cgroup_memory_move_mb') is not None:
+                size = min(size, int(params.get('cgroup_memory_move_mb')))
+
+            sessions[0].sendline('dd if=/dev/zero of=/dev/null bs=%dM '
+                                 'iflag=fullblock' % size)
             time.sleep(2)
 
             sessions[1].cmd('killall -SIGUSR1 dd')
@@ -975,7 +980,7 @@ def run_cgroup(test, params, env):
                                       (dd_res[0], dd_res[1], dd_res[2],
                                        dd_res[3]))
 
-            return ("Memory move succeeded")
+            return ("Guest moved 10 times while creating %dMB blocks" % size)
 
 
     class TestMemoryLimit:
@@ -1009,13 +1014,16 @@ def run_cgroup(test, params, env):
             """
             Initialization: prepares the cgroup and starts new VM inside it.
             """
-            # Use half of the VM's memory (in KB)
-            mem = int(int(params.get('mem', 1024)) * 512)
+            # Don't allow specifying more than 1/2 of the VM's memory
+            mem = int(params.get('mem', 1024)) * 512
+            if params.get('cgroup_memory_limit_kb') is not None:
+                mem = min(mem, int(params.get('cgroup_memory_limit_kb')))
+
             self.cgroup.initialize(self.modules)
             self.cgroup.mk_cgroup()
             self.cgroup.set_property('memory.move_charge_at_immigrate', '3',
                                      self.cgroup.cgroups[0])
-            self.cgroup.set_property_h('memory.limit_in_bytes', "%sK" % mem,
+            self.cgroup.set_property_h('memory.limit_in_bytes', "%dK" % mem,
                                      self.cgroup.cgroups[0])
 
             logging.info("Expected VM reload")
@@ -1040,9 +1048,13 @@ def run_cgroup(test, params, env):
             """
             session = self.vm.wait_for_login(timeout=30)
 
-            # Convert into KB, use 0.6 * guest memory (== * 614.4)
-            mem = int(int(params.get('mem', 1024)) * 615)
-            session.sendline('dd if=/dev/zero of=/dev/null bs=%sK count=1' %mem)
+            # Use 1.1 * memory_limit block size
+            mem = int(params.get('mem', 1024)) * 512
+            if params.get('cgroup_memory_limit_kb') is not None:
+                mem = min(mem, int(params.get('cgroup_memory_limit_kb')))
+            mem *= 1.1
+            session.sendline('dd if=/dev/zero of=/dev/null bs=%dK count=1 '
+                             'iflag=fullblock' % mem)
 
             # Check every 0.1s VM memory usage. Limit the maximum execution time
             # to mem / 10 (== mem * 0.1 sleeps)
@@ -1077,7 +1089,7 @@ def run_cgroup(test, params, env):
                 raise error.TestFail("VM didn't consume expected amount of "
                                      "memory. Output of dd cmd: %s" % out)
 
-            return ("Limits were enforced successfully.")
+            return ("Created %dMB block with 1.1x the memory limit" % (mem/1024))
 
 
     class _TestCpuShare(object):
@@ -1599,8 +1611,8 @@ def run_cgroup(test, params, env):
 
             # CPUs: O_O_, Stat: 200
             cpus = "0"
-            for i in range(2, min(vm_cpus*2, no_cpus)):
-                cpus += ",%d" % i*2
+            for i in range(1, min(vm_cpus, (no_cpus/2))):
+                cpus += ",%d" % (i*2)
             self.cgroup.set_property('cpuset.cpus', cpus, 0)
             stats.append((_test_it(tst_time), cpus))
 
-- 
1.7.7.4
