public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
       [not found] <120253480.1747631244710010660.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-06-11  8:53 ` Michael Goldish
  2009-06-11  9:46   ` Yolkfull Chow
  2009-06-12 13:27   ` [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2 Yolkfull Chow
  0 siblings, 2 replies; 6+ messages in thread
From: Michael Goldish @ 2009-06-11  8:53 UTC (permalink / raw)
  To: Yolkfull Chow; +Cc: Uri Lublin, kvm


----- "Yolkfull Chow" <yzhou@redhat.com> wrote:

> Michael, these are the backtrace messages:
> 
> ...
> 20090611-064959 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> ERROR: run_once: Test failed: [Errno 12] Cannot allocate memory
> 20090611-064959 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> DEBUG: run_once: Postprocessing on error...
> 20090611-065000 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> DEBUG: postprocess_vm: Postprocessing VM 'vm1'...
> 20090611-065000 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> DEBUG: postprocess_vm: VM object found in environment
> 20090611-065000 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> DEBUG: send_monitor_cmd: Sending monitor command: screendump 
> /kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
> 20090611-065000 
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
> 
> DEBUG: run_once: Contents of environment: {'vm__vm1': <kvm_vm.VM 
> instance at 0x92999a28>}
> post-test sysinfo error:
> Traceback (most recent call last):
>    File "/kvm-autotest/client/common_lib/log.py", line 58, in
> decorated_func
>      fn(*args, **dargs)
>    File "/kvm-autotest/client/bin/base_sysinfo.py", line 213, in 
> log_after_each_test
>      log.run(test_sysinfodir)
>    File "/kvm-autotest/client/bin/base_sysinfo.py", line 112, in run
>      shell=True, env=env)
>    File "/usr/lib64/python2.4/subprocess.py", line 412, in call
>      return Popen(*args, **kwargs).wait()
>    File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>      errread, errwrite)
>    File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
>      self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> 2009-06-11 06:50:02,859 Configuring logger for client level
>          FAIL    
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>    
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>    
> timestamp=1244717402    localtime=Jun 11 06:50:02    Unhandled
> OSError: 
> [Errno 12] Cannot allocate memory
>            Traceback (most recent call last):
>              File "/kvm-autotest/client/common_lib/test.py", line 304,
> 
> in _exec
>                self.execute(*p_args, **p_dargs)
>              File "/kvm-autotest/client/common_lib/test.py", line 187,
> 
> in execute
>                self.run_once(*args, **dargs)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_runtest_2.py", line 145,
> 
> in run_once
>                routine_obj.routine(self, params, env)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py", line 3071, in
> 
> run_boot_vms
>                curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login,
> 
> 240, 0, 2)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 797, in
> 
> wait_for
>                output = func()
>              File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_vm.py", 
> line 728, in ssh_login
>                session = kvm_utils.ssh(address, port, username, 
> password, prompt, timeout)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 553, in
> ssh
>                return remote_login(command, password, prompt, "\n",
> timeout)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 431, in
> 
> remote_login
>                sub = kvm_spawn(command, linesep)
>              File 
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 114, in
> 
> __init__
>                (pid, fd) = pty.fork()
>              File "/usr/lib64/python2.4/pty.py", line 108, in fork
>                pid = os.fork()
>            OSError: [Errno 12] Cannot allocate memory
> Persistent state variable __group_level now set to 1
>      END FAIL    
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>    
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>    
> timestamp=1244717403    localtime=Jun 11 06:50:03
> Dropping caches
> 2009-06-11 06:50:03,409 running: sync
> JOB ERROR: Unhandled OSError: [Errno 12] Cannot allocate memory
> Traceback (most recent call last):
>    File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
>      execfile(self.control, global_control_vars, global_control_vars)
>    File "/kvm-autotest/client/control", line 1030, in ?
>      cfg_to_test("kvm_tests.cfg")
>    File "/kvm-autotest/client/control", line 1013, in cfg_to_test
>      current_status = job.run_test("kvm_runtest_2", params=dict, 
> tag=tagname)
>    File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
>      utils.drop_caches()
>    File "/kvm-autotest/client/bin/base_utils.py", line 638, in
> drop_caches
>      utils.system("sync")
>    File "/kvm-autotest/client/common_lib/utils.py", line 510, in
> system
>      stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
>    File "/kvm-autotest/client/common_lib/utils.py", line 330, in run
>      bg_job = join_bg_jobs(
>    File "/kvm-autotest/client/common_lib/utils.py", line 37, in
> __init__
>      stdin=stdin)
>    File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>      errread, errwrite)
>    File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
>      self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> 
> Persistent state variable __group_level now set to 0
> END ABORT    ----    ----    timestamp=1244717418    localtime=Jun 11
> 
> 06:50:18    Unhandled OSError: [Errno 12] Cannot allocate memory
>    Traceback (most recent call last):
>      File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
>        execfile(self.control, global_control_vars,
> global_control_vars)
>      File "/kvm-autotest/client/control", line 1030, in ?
>        cfg_to_test("kvm_tests.cfg")
>      File "/kvm-autotest/client/control", line 1013, in cfg_to_test
>        current_status = job.run_test("kvm_runtest_2", params=dict, 
> tag=tagname)
>      File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
>        utils.drop_caches()
>      File "/kvm-autotest/client/bin/base_utils.py", line 638, in
> drop_caches
>        utils.system("sync")
>      File "/kvm-autotest/client/common_lib/utils.py", line 510, in
> system
>        stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
>      File "/kvm-autotest/client/common_lib/utils.py", line 330, in
> run
>        bg_job = join_bg_jobs(
>      File "/kvm-autotest/client/common_lib/utils.py", line 37, in
> __init__
>        stdin=stdin)
>      File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>        errread, errwrite)
>      File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
>        self.pid = os.fork()
>    OSError: [Errno 12] Cannot allocate memory
> [root@dhcp-66-70-9 kvm_runtest_2]#

Thanks. It does indeed look like a legitimate OSError in os.fork().
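For context, errno 12 from os.fork() is ENOMEM: the kernel could not allocate (or refused to overcommit) the memory needed for a new process, which is what you'd expect once enough 1024 MB guests are running. A minimal sketch of handling that case explicitly (modern Python syntax; `fork_or_none` and the fake fork are hypothetical helpers, not kvm-autotest code):

```python
import errno

def fork_or_none(fork_func):
    """Call a fork-like function; return its pid (or 0 in the child),
    or None when the host is out of memory (ENOMEM)."""
    try:
        return fork_func()
    except OSError as e:
        if e.errno == errno.ENOMEM:
            return None  # out of memory: back off instead of crashing
        raise  # any other fork failure is still fatal

def fake_fork_enomem():
    """Stand-in for os.fork() on a memory-exhausted host."""
    raise OSError(errno.ENOMEM, "Cannot allocate memory")

print(fork_or_none(fake_fork_enomem))  # → None
```

In a stress test this is the natural point to stop booting guests and report the count reached, rather than letting the OSError unwind the whole job.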

BTW, do you have any idea why the result dir has such a weird name?
/kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm

And why sometimes a normal looking tag appears (in the log messages):
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024

Why all the [] and <> in the weird version? Did you somehow do that intentionally, or is it some sort of bug?
And why is 'None' there? The tag is supposed to be the test's 'shortname', which is determined by kvm_config.py
as it parses kvm_tests.cfg (or the config file you're using).

Normally the result dir should just be kvm_runtest_2.shortname, and in this case:
kvm_runtest_2.no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
  2009-06-11  8:53 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Michael Goldish
@ 2009-06-11  9:46   ` Yolkfull Chow
  2009-06-12 13:27   ` [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2 Yolkfull Chow
  1 sibling, 0 replies; 6+ messages in thread
From: Yolkfull Chow @ 2009-06-11  9:46 UTC (permalink / raw)
  To: Michael Goldish; +Cc: Uri Lublin, kvm

On 06/11/2009 04:53 PM, Michael Goldish wrote:
> ----- "Yolkfull Chow"<yzhou@redhat.com>  wrote:
>
>> Michael, these are the backtrace messages:
>>
>> [... full backtrace snipped; see the previous message ...]
>>
> Thanks. It does indeed look like a legitimate OSError in os.fork().
>
> BTW, do you have any idea why the result dir has such a weird name?
> /kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
>
> And why sometimes a normal looking tag appears (in the log messages):
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
>
> Why all the [] and<>  in the weird version? Did you somehow do that intentionally, or is it some sort of bug?
> And why is 'None' there? The tag is supposed to be the test's 'shortname', which is determined by kvm_config.py
> as it parses kvm_tests.cfg (or the config file you're using).
>
> Normally the result dir should just be kvm_runtest_2.shortname, and in this case:
> kvm_runtest_2.no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
>    
Hi Michael, it's not a defect or a problem; we did it intentionally, for a 
particular purpose. We have now unified it with autotest's style. 
Thank you for the kind reminder. :)

-- 
Regards,
Yolkfull


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2
  2009-06-11  8:53 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Michael Goldish
  2009-06-11  9:46   ` Yolkfull Chow
@ 2009-06-12 13:27   ` Yolkfull Chow
  2009-06-18  8:17     ` Lucas Meneghel Rodrigues
  1 sibling, 1 reply; 6+ messages in thread
From: Yolkfull Chow @ 2009-06-12 13:27 UTC (permalink / raw)
  To: kvm; +Cc: Michael Goldish, Uri Lublin

[-- Attachment #1: Type: text/plain, Size: 176 bytes --]

Following are the differences from version 1:

1) use the framework to destroy all VMs except the main_vm
2) boot every VM except the first one from a snapshot
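The cloning step in the patch below can be pictured as copying the main VM's params and flipping a few flags before each extra guest is created. This sketch only mirrors the dict manipulation (the param names come from the patch; the plain-dict shape and `clone_params` helper are illustrative, not kvm-autotest API):

```python
def clone_params(base_params):
    """Per-clone params as in run_stress_boot: boot from a snapshot so
    clones never write to the shared disk image, and let the framework
    destroy them (forcefully, not gracefully) at postprocessing time."""
    p = dict(base_params)             # don't mutate the main VM's params
    p['image_snapshot'] = "yes"
    p['kill_vm'] = "yes"
    p['kill_vm_gracefully'] = "no"
    return p

base = {'image_snapshot': "no", 'kill_vm': "no"}
clone = clone_params(base)
print(clone['image_snapshot'])  # → yes
print(base['image_snapshot'])   # → no  (main VM params untouched)
```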


Regards,
Yolkfull

[-- Attachment #2: stress_boot_v2.patch --]
[-- Type: text/plain, Size: 4431 bytes --]

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 9428162..1f553b4 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -53,6 +53,7 @@ class kvm(test.test):
                 "autotest":     test_routine("kvm_tests", "run_autotest"),
                 "kvm_install":  test_routine("kvm_install", "run_kvm_install"),
                 "linux_s3":     test_routine("kvm_tests", "run_linux_s3"),
+		"stress_boot":	test_routine("kvm_tests", "run_stress_boot"),
                 }
 
         # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index c73da7c..ff7abea 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -81,6 +81,10 @@ variants:
     - linux_s3:      install setup
         type = linux_s3
 
+    - stress_boot:
+	type = stress_boot
+	max_vms = 5
+
 # NICs
 variants:
     - @rtl8139:
@@ -101,6 +105,8 @@ variants:
         ssh_status_test_command = echo $?
         username = root
         password = 123456
+	stress_boot:
+	    alive_test_cmd = ps aux
 
         variants:
             - Fedora:
@@ -291,6 +297,8 @@ variants:
         password = 123456
         migrate:
             migration_test_command = ver && vol
+	stress_boot:
+	    alive_test_cmd = systeminfo
 
         variants:
             - Win2000:
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index 54d2a7a..fde33bb 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -466,3 +466,77 @@ def run_linux_s3(test, params, env):
     logging.info("VM resumed after S3")
 
     session.close()
+
+
+def run_stress_boot(tests, params, env):
+    """
+    Boots VMs until one of them becomes unresponsive, and records the maximum
+    number of VMs successfully started:
+    1) boot the first vm
+    2) boot the second vm cloned from the first vm, check whether it boots up
+       and all booted vms can ssh-login
+    3) go on until cannot create VM anymore or cannot allocate memory for VM
+
+    @param test:   kvm test object
+    @param params: Dictionary with the test parameters
+    @param env:    Dictionary with test environment.
+    """
+    # boot the first vm
+    vm = kvm_utils.env_get_vm(env, params.get("main_vm"))
+
+    if not vm:
+        raise error.TestError("VM object not found in environment")
+    if not vm.is_alive():
+        raise error.TestError("VM seems to be dead; Test requires a living VM")
+
+    logging.info("Waiting for first guest to be up...")
+
+    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
+    if not session:
+        raise error.TestFail("Could not log into first guest")
+
+    num = 1
+    vms = []
+    sessions = [session]
+
+    # boot the VMs
+    while num <= int(params.get("max_vms")):
+        try:
+            vm_name = "vm" + str(num)
+
+            # clone vm according to the first one
+            vm_params = params.copy()
+            vm_params['image_snapshot'] = "yes"
+            vm_params['kill_vm'] = "yes"
+            vm_params['kill_vm_gracefully'] = "no"
+            curr_vm = vm.clone(vm_name, vm_params)
+            kvm_utils.env_register_vm(env, vm_name, curr_vm)
+            params['vms'] += " " + vm_name
+
+            #vms.append(curr_vm)
+            logging.info("Booting guest #%d" % num)
+            if not curr_vm.create():
+                raise error.TestFail("Cannot create VM #%d" % num)
+
+            curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login, 240, 0, 2)
+            if not curr_vm_session:
+                raise error.TestFail("Could not log into guest #%d" % num)
+
+            logging.info("Guest #%d boots up successfully" % num)
+            sessions.append(curr_vm_session)
+
+            # check whether all previous ssh sessions are responsive
+            for i, vm_session in enumerate(sessions):
+                if vm_session.get_command_status(params.get("alive_test_cmd")):
+                    raise error.TestFail("Session #%d is not responsive" % i)
+            num += 1
+
+        except (error.TestFail, OSError):
+            for se in sessions:
+                se.close()
+            logging.info("Total number booted: %d" % num)
+            raise
+    else:
+        for se in sessions:
+            se.close()
+        logging.info("Total number booted: %d" % num)

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2
  2009-06-12 13:27   ` [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2 Yolkfull Chow
@ 2009-06-18  8:17     ` Lucas Meneghel Rodrigues
  2009-06-18  9:16       ` Yolkfull Chow
  0 siblings, 1 reply; 6+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-18  8:17 UTC (permalink / raw)
  To: Yolkfull Chow; +Cc: kvm, Michael Goldish, Uri Lublin

On Fri, 2009-06-12 at 21:27 +0800, Yolkfull Chow wrote:
> Following are the differences from version 1:
> 
> 1) use the framework to destroy all VMs except the main_vm
> 2) boot every VM except the first one from a snapshot
> 
> 
> Regards,
> Yolkfull

Hi Yolkfull, Michael and Uri already made a thorough first comment about
your test, and I have a minor thing to note (and I admit I'm being picky
here):

+            # check whether all previous ssh sessions are responsive
+            for i, vm_session in enumerate(sessions):
+                if vm_session.get_command_status(params.get("alive_test_cmd")):
+                    raise error.TestFail("Session #%d is not responsive" % i)
+            num += 1
+
+        except (error.TestFail, OSError):
+            for se in sessions:
+                se.close()
+            logging.info("Total number booted: %d" % num)
+            raise
+    else:
+        for se in sessions:
+            se.close()
+        logging.info("Total number booted: %d" % num)

When the test finishes successfully, the counter num will be incremented
by one, will break the while condition and later will be used to print
the number of vms successfully booted. In the end the total number of vms
booted that the test will report is the actual number of vms booted plus
1. To fix this we can either:

 * Just subtract 1 from num at the last info logging call;
 * Remove the num initialization and replace the while loop by a

for num in range(1, int(params.get("max_vms"))):

this way we don't even need to increment num manually.
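The off-by-one can be seen in isolation with two hypothetical counters (the VM work is elided; these are stand-ins for the test's loop, not the actual test functions):

```python
def boot_count_while(max_vms):
    """Mimics the patch's while-loop: num starts at 1 and is incremented
    after each successful boot, so it ends one past the real count."""
    num = 1
    while num <= max_vms:
        # ... boot and verify guest #num here ...
        num += 1
    return num  # reported count: actual boots plus 1

def boot_count_for(max_vms):
    """for-loop variant: the number booted is tracked directly."""
    booted = 0
    for num in range(1, max_vms + 1):
        # ... boot and verify guest #num here ...
        booted += 1
    return booted

print(boot_count_while(5))  # → 6
print(boot_count_for(5))    # → 5
```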

It's up to you which one you're going to implement. I have tested your
code and it works fine (aside from the minor cosmetic issue). Once you
send me an updated version, I am going to apply it. 

Thanks for your work!

-- 
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2
  2009-06-18  8:17     ` Lucas Meneghel Rodrigues
@ 2009-06-18  9:16       ` Yolkfull Chow
  2009-06-19 13:06         ` Lucas Meneghel Rodrigues
  0 siblings, 1 reply; 6+ messages in thread
From: Yolkfull Chow @ 2009-06-18  9:16 UTC (permalink / raw)
  To: Lucas Meneghel Rodrigues; +Cc: kvm, Michael Goldish, Uri Lublin

[-- Attachment #1: Type: text/plain, Size: 2236 bytes --]

On 06/18/2009 04:17 PM, Lucas Meneghel Rodrigues wrote:
> On Fri, 2009-06-12 at 21:27 +0800, Yolkfull Chow wrote:
>    
>> [... patch description snipped ...]
>>      
> Hi Yolkfull, Michael and Uri already made a thorough first comment about
> your test, and I have a minor thing to note (and I admit I'm being picky
> here):
>
> [... quoted code snipped ...]
>
> When the test finishes successfully, the counter num will be incremented
> by one, will break the while condition and later will be used to print
> the number of vms successfully booted. In the end the total number of vms
> booted that the test will report is the actual number of vms booted plus
> 1. To fix this we can either:
>
>   * Just subtract 1 from num at the last info logging call;
>   * Remove the num initialization and replace the while loop by a
>
> for num in range(1, int(params.get("max_vms"))):
>
> this way we don't even need to increment num manually.
>
> It's up to you which one you're going to implement. I have tested your
> code and it works fine (aside from the minor cosmetic issue). Once you
> send me an updated version, I am going to apply it.
>
> Thanks for your work!
>
>    
Hi Lucas, I also found the number-counting problem myself after sending 
the patch, but I haven't been able to re-send an updated one because I've 
had some other things to deal with these days. Sorry for that...

Please see attachment for updated version.   Thank you so much.  :)

-- 
Regards,
Yolkfull


[-- Attachment #2: stress_boot.patch --]
[-- Type: text/plain, Size: 4308 bytes --]

diff --git a/client/tests/kvm/kvm.py b/client/tests/kvm/kvm.py
index 9428162..43d7bbc 100644
--- a/client/tests/kvm/kvm.py
+++ b/client/tests/kvm/kvm.py
@@ -53,6 +53,7 @@ class kvm(test.test):
                 "autotest":     test_routine("kvm_tests", "run_autotest"),
                 "kvm_install":  test_routine("kvm_install", "run_kvm_install"),
                 "linux_s3":     test_routine("kvm_tests", "run_linux_s3"),
+                "stress_boot":  test_routine("kvm_tests", "run_stress_boot"),
                 }
 
         # Make it possible to import modules from the test's bindir
diff --git a/client/tests/kvm/kvm_tests.cfg.sample b/client/tests/kvm/kvm_tests.cfg.sample
index 2c0b321..7f4e9b9 100644
--- a/client/tests/kvm/kvm_tests.cfg.sample
+++ b/client/tests/kvm/kvm_tests.cfg.sample
@@ -82,6 +82,11 @@ variants:
     - linux_s3:      install setup
         type = linux_s3
 
+    - stress_boot:
+        type = stress_boot
+        max_vms = 5    
+        alive_test_cmd = ps aux
+
 # NICs
 variants:
     - @rtl8139:
@@ -292,6 +297,8 @@ variants:
         password = 123456
         migrate:
             migration_test_command = ver && vol
+        stress_boot:
+            alive_test_cmd = systeminfo
 
         variants:
             - Win2000:
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index 4270cae..11f7bf0 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -474,3 +474,77 @@ def run_linux_s3(test, params, env):
     logging.info("VM resumed after S3")
 
     session.close()
+
+
+def run_stress_boot(tests, params, env):
+    """
+    Boots VMs until one of them becomes unresponsive, and records the maximum
+    number of VMs successfully started:
+    1) boot the first vm
+    2) boot the second vm cloned from the first vm, check whether it boots up
+       and all booted vms can ssh-login
+    3) go on until cannot create VM anymore or cannot allocate memory for VM
+
+    @param test:   kvm test object
+    @param params: Dictionary with the test parameters
+    @param env:    Dictionary with test environment.
+    """
+    # boot the first vm
+    vm = kvm_utils.env_get_vm(env, params.get("main_vm"))
+
+    if not vm:
+        raise error.TestError("VM object not found in environment")
+    if not vm.is_alive():
+        raise error.TestError("VM seems to be dead; Test requires a living VM")
+
+    logging.info("Waiting for first guest to be up...")
+
+    session = kvm_utils.wait_for(vm.ssh_login, 240, 0, 2)
+    if not session:
+        raise error.TestFail("Could not log into first guest")
+
+    num = 2
+    vms = []
+    sessions = [session]
+
+    # boot the VMs
+    while num <= int(params.get("max_vms")):
+        try:
+            vm_name = "vm" + str(num)
+
+            # clone vm according to the first one
+            vm_params = params.copy()
+            vm_params['image_snapshot'] = "yes"
+            vm_params['kill_vm'] = "yes"
+            vm_params['kill_vm_gracefully'] = "no"
+            curr_vm = vm.clone(vm_name, vm_params)
+            kvm_utils.env_register_vm(env, vm_name, curr_vm)
+            params['vms'] += " " + vm_name
+
+            #vms.append(curr_vm)
+            logging.info("Booting guest #%d" % num)
+            if not curr_vm.create():
+                raise error.TestFail("Cannot create VM #%d" % num)
+
+            curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login, 240, 0, 2)
+            if not curr_vm_session:
+                raise error.TestFail("Could not log into guest #%d" % num)
+
+            logging.info("Guest #%d boots up successfully" % num)
+            sessions.append(curr_vm_session)
+
+            # check whether all previous ssh sessions are responsive
+            for i, vm_session in enumerate(sessions):
+                if vm_session.get_command_status(params.get("alive_test_cmd")):
+                    raise error.TestFail("Session #%d is not responsive" % i)
+            num += 1
+
+        except (error.TestFail, OSError):
+            for se in sessions:
+                se.close()
+            logging.info("Total number booted: %d" % (num - 1))
+            raise
+    else:
+        for se in sessions:
+            se.close()
+        logging.info("Total number booted: %d" % (num -1))

^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2
  2009-06-18  9:16       ` Yolkfull Chow
@ 2009-06-19 13:06         ` Lucas Meneghel Rodrigues
  0 siblings, 0 replies; 6+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-19 13:06 UTC (permalink / raw)
  To: Yolkfull Chow; +Cc: kvm, Michael Goldish, Uri Lublin

On Thu, 2009-06-18 at 17:16 +0800, Yolkfull Chow wrote:

> Hi Lucas,  I also found the number counting problem later after sending 
> the patch. I haven't been able to re-send the updated one since I got 
> some other things to deal with in these days.   Sorry for that...
> 
> Please see attachment for updated version.   Thank you so much.  :)

Ok, patch applied. Thank you very much for your work!



^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2009-06-19 13:06 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <120253480.1747631244710010660.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-06-11  8:53 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Michael Goldish
2009-06-11  9:46   ` Yolkfull Chow
2009-06-12 13:27   ` [KVM-AUTOTEST PATCH] stress_boot - Boot VMs until one of them becomes unresponsive - Version2 Yolkfull Chow
2009-06-18  8:17     ` Lucas Meneghel Rodrigues
2009-06-18  9:16       ` Yolkfull Chow
2009-06-19 13:06         ` Lucas Meneghel Rodrigues

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox