* [KVM-AUTOTEST PATCH 0/8] Re-submitting some of the patches on the patch queue
@ 2009-06-08 4:01 Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
2009-06-09 8:41 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Yolkfull Chow
0 siblings, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues
I have rebased some of the patches on the patch queue I sent earlier to the
list. I am sending them for your appreciation; the others will follow.
* [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter
2009-06-08 4:01 [KVM-AUTOTEST PATCH 0/8] Re-submitting some of the patches on the patch queue Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 1/3] Make possible to use kvm_config as a standalone program Lucas Meneghel Rodrigues
2009-06-08 15:16 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
2009-06-09 8:41 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Yolkfull Chow
1 sibling, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, David Huff
This fix modifies kvm_config.split_and_strip so it will only split once per
line.
example: kernel_args = "ks=floppy console=ttyS0 noacpi"
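For illustration, here is the behavior difference on a plain Python string
(a standalone sketch, independent of the test framework; split_and_strip
then strips whitespace and quotes from each part, as the code below shows):

    line = 'kernel_args = "ks=floppy console=ttyS0 noacpi"'

    # Old behavior: every "=" is a split point, so the value is mangled.
    print(line.split("="))
    # ['kernel_args ', ' "ks', 'floppy console', 'ttyS0 noacpi"']

    # New behavior: split only once, keeping "=" inside the value intact.
    print(line.split("=", 1))
    # ['kernel_args ', ' "ks=floppy console=ttyS0 noacpi"']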
Signed-off-by: David Huff <dhuff@redhat.com>
---
client/tests/kvm/kvm_config.py | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
index 8b6ab15..40f16f1 100755
--- a/client/tests/kvm/kvm_config.py
+++ b/client/tests/kvm/kvm_config.py
@@ -136,7 +136,7 @@ class config:
@param str: String that will be processed
@param sep: Separator that will be used to split the string
"""
- temp = str.split(sep)
+ temp = str.split(sep, 1)
for i in range(len(temp)):
temp[i] = temp[i].strip()
temp[i] = temp[i].strip("\"\'")
--
1.6.2.2
* [PATCH 1/3] Make possible to use kvm_config as a standalone program.
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 2/3] Fixing bad line breaks Lucas Meneghel Rodrigues
2009-06-08 15:16 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
1 sibling, 1 reply; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues
Replace autotest exceptions with standard Python exceptions. This
allows kvm_config.py to be used as a standalone program.
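To illustrate why this matters, a minimal sketch of standalone use once only
standard exceptions are raised (the constructor call below is an assumption
for illustration, not the module's documented entry point):

    import kvm_config

    try:
        cfg = kvm_config.config("kvm_tests.cfg")  # hypothetical invocation
    except (ValueError, IOError) as e:
        # No autotest_lib import is needed to handle parse errors anymore.
        print("Failed to parse config: %s" % e)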
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_config.py | 7 +++----
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
index 8b6ab15..dda421b 100755
--- a/client/tests/kvm/kvm_config.py
+++ b/client/tests/kvm/kvm_config.py
@@ -1,5 +1,4 @@
import re, os, sys, StringIO
-from autotest_lib.client.common_lib import error
"""
KVM configuration file utility functions.
@@ -356,7 +355,7 @@ class config:
# (inside an exception or inside subvariants)
if restricted:
e_msg = "Using variants in this context is not allowed"
- raise error.AutotestError()
+ raise ValueError(e_msg)
if self.debug and not restricted:
self.__debug_print(indented_line,
"Entering variants block (%d dicts in"
@@ -401,7 +400,7 @@ class config:
words[1])
if not os.path.exists(filename):
e_msg = "Cannot include %s -- file not found" % filename
- raise error.AutotestError(e_msg)
+ raise IOError(e_msg)
new_file = open(filename, "r")
list = self.parse(new_file, list, restricted)
new_file.close()
@@ -409,7 +408,7 @@ class config:
self.__debug_print("", "Leaving file %s" % words[1])
else:
e_msg = "Cannot include anything because no file is open"
- raise error.AutotestError(e_msg)
+ raise ValueError(e_msg)
# Parse multi-line exceptions
# (the block is parsed for each dict separately)
--
1.6.2.2
* [PATCH 2/3] Fixing bad line breaks
2009-06-08 4:01 ` [PATCH 1/3] Make possible to use kvm_config as a standalone program Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
0 siblings, 1 reply; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues
During the conversion of logging statements, some bad line
continuations were introduced. This patch fixes the mistakes.
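For context, a simplified illustration of the kind of breakage being fixed
(not the exact framework code; test_name is a placeholder value):

    test_name = "sleep"  # placeholder value for illustration

    # Broken pattern: the statement was split in two, e.g.
    #     message_fail = "Test '%s' did not produce any recognizable"
    #     " results" % test_name
    # so message_fail lost its tail, and the dangling second line raised
    # TypeError because its string has no conversion specifier.

    # Kept on one logical line, the "%" formatting applies to the whole
    # message (the patch itself uses a backslash continuation instead):
    message_fail = ("Test '%s' did not produce any recognizable"
                    " results" % test_name)
    print(message_fail)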
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_tests.py | 12 ++++++------
client/tests/kvm/make_html_report.py | 5 ++---
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index cccc48e..9adea6f 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -274,8 +274,8 @@ def run_autotest(test, params, env):
copy = True
# Perform the copy
if copy:
- logging.info("Copying %s.tar.bz2 to guest (file is missing or has a"
- " different size)..." % test_name)
+ logging.info("Copying %s.tar.bz2 to guest \
+ (file is missing or has a different size)..." % test_name)
if not vm.scp_to_remote(tarred_test_path, ""):
raise error.TestFail("Could not copy %s.tar.bz2 to guest" %
test_name)
@@ -291,8 +291,8 @@ def run_autotest(test, params, env):
# Extract <test_name>.tar.bz2 into autotest/tests
logging.info("Extracting %s.tar.bz2..." % test_name)
- status = session.get_command_status("tar xvfj %s.tar.bz2 -C "
- "autotest/tests" % test_name)
+ status = session.get_command_status("tar xvfj %s.tar.bz2 -C \
+ autotest/tests" % test_name)
if status != 0:
raise error.TestFail("Could not extract %s.tar.bz2" % test_name)
@@ -321,8 +321,8 @@ def run_autotest(test, params, env):
status_fail = False
if result_list == []:
status_fail = True
- message_fail = "Test '%s' did not produce any recognizable"
- " results" % test_name
+ message_fail = "Test '%s' did not produce any recognizable \
+ results" % test_name
for result in result_list:
logging.info(str(result))
if result[1] == "FAIL":
diff --git a/client/tests/kvm/make_html_report.py b/client/tests/kvm/make_html_report.py
index 6aed39e..5b2e579 100755
--- a/client/tests/kvm/make_html_report.py
+++ b/client/tests/kvm/make_html_report.py
@@ -1442,9 +1442,8 @@ return true;
stat_str = 'No test cases executed'
if total_executed>0:
failed_perct = int(float(total_failed)/float(total_executed)*100)
- stat_str = 'From %d tests executed, '
- '%d have passed (%d%s)' % (total_executed, total_passed,failed_perct,
- '% failures')
+ stat_str = 'From %d tests executed, %d have passed (%d%% failures)' % \
+ (total_executed, total_passed, failed_perct)
kvm_ver_str = metadata['kvmver']
--
1.6.2.2
* [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers
2009-06-08 4:01 ` [PATCH 2/3] Fixing bad line breaks Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 3/3] Fix bad logging calls Lucas Meneghel Rodrigues
2009-06-08 15:17 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
0 siblings, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, Michael Goldish
Fix the first barrier to include the 'boot:' prompt. This is crucial
because the guest accepts keyboard input only after this prompt
appears.
Visibility: Small (Data file on the kvm test)
Risk: Low (Small, tested change to a data file)
Signed-off-by: Michael Goldish <mgoldish@redhat.com>
---
client/tests/kvm/steps/RHEL-4.7-i386.steps | 2 +-
client/tests/kvm/steps/RHEL-4.7-x86_64.steps | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/client/tests/kvm/steps/RHEL-4.7-i386.steps b/client/tests/kvm/steps/RHEL-4.7-i386.steps
index 3816c15..763d473 100644
--- a/client/tests/kvm/steps/RHEL-4.7-i386.steps
+++ b/client/tests/kvm/steps/RHEL-4.7-i386.steps
@@ -4,7 +4,7 @@
# --------------------------------
step unknown
screendump 20090413_013526_868fe81019ae64a0b066c4c0d4ebc4e1.ppm
-barrier_2 188 30 354 265 48ef114b5a42ba0d5bebfaee47dce498 50
+barrier_2 44 38 0 363 d9ca61811a10b33cc95515d4796541e7 50
# Sending keys: ret
key ret
# --------------------------------
diff --git a/client/tests/kvm/steps/RHEL-4.7-x86_64.steps b/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
index 644446f..36f0109 100644
--- a/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
+++ b/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
@@ -6,7 +6,7 @@ step 8.84
screendump 20080101_000001_868fe81019ae64a0b066c4c0d4ebc4e1.ppm
# boot options
sleep 5
-barrier_2 194 59 101 59 8c4f6b29e4087e1bed13c40e4a6a904f 44 optional
+barrier_2 44 36 0 365 ea4c08daabe1f982b243fce9c5b542a0 44 optional
# Sending keys: ret
key ret
# --------------------------------
--
1.6.2.2
* [PATCH 3/3] Fix bad logging calls
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes Lucas Meneghel Rodrigues
2009-06-08 15:17 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
1 sibling, 1 reply; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues
During the conversion of kvm autotest to upstream coding standards,
some bad logging calls were left behind. This patch fixes them.
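For context, a small sketch of the pattern being corrected (simplified, not
the exact framework code; status and output are placeholder values):

    import logging
    logging.basicConfig(level=logging.DEBUG)

    status, output = 1, "command not found"

    # Broken calls either passed an extra argument with no placeholder for
    # it, or built the string eagerly with "%" and "+", e.g.:
    #     logging.debug("Command failed; status: %d, output:", output)
    #     logging.debug("Command failed; status: %d, output:" % status + output)

    # Fixed: pass the arguments to logging and let it do the formatting.
    logging.debug("Command failed; status: %d, output: %s", status, output)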
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/tests/kvm/kvm_utils.py | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/client/tests/kvm/kvm_utils.py b/client/tests/kvm/kvm_utils.py
index 434190d..37a1f22 100644
--- a/client/tests/kvm/kvm_utils.py
+++ b/client/tests/kvm/kvm_utils.py
@@ -304,7 +304,7 @@ class kvm_spawn:
# Print some debugging info
if match == None and self.poll() != 0:
- logging.debug("Timeout elapsed or process terminated. Output:",
+ logging.debug("Timeout elapsed or process terminated. Output: %s",
format_str_for_message(data.strip()))
return (match, data)
@@ -465,8 +465,8 @@ class kvm_spawn:
# Print some debugging info
if status != 0:
- logging.debug("Command failed; status: %d, output:" % status \
- + format_str_for_message(output.strip()))
+ logging.debug("Command failed; status: %d, output: %s", status,
+ format_str_for_message(output.strip()))
return (status, output)
--
1.6.2.2
* [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes
2009-06-08 4:01 ` [PATCH 3/3] Fix bad logging calls Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 " Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 3/8] WinXP " Lucas Meneghel Rodrigues
0 siblings, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, Michael Goldish
Add an optional barrier to deal with a closed start menu.
Signed-off-by: Michael Goldish <mgoldish@redhat.com>
---
client/tests/kvm/steps/WinXP-32-setupssh.steps | 10 ++++++++--
client/tests/kvm/steps/WinXP-32.steps | 4 +++-
client/tests/kvm/steps/WinXP-64.steps | 3 +--
3 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/client/tests/kvm/steps/WinXP-32-setupssh.steps b/client/tests/kvm/steps/WinXP-32-setupssh.steps
index 729d9df..ebb665f 100644
--- a/client/tests/kvm/steps/WinXP-32-setupssh.steps
+++ b/client/tests/kvm/steps/WinXP-32-setupssh.steps
@@ -4,8 +4,14 @@
# --------------------------------
step 24.72
screendump 20080101_000001_5965948293222a6d6f3e545db40c23c1.ppm
-# open start menu
-barrier_2 125 79 342 270 368b3d82c870dbcdc4dfc2a49660e798 124
+# desktop reached
+barrier_2 36 32 392 292 3828d3a9587b3a9766a567a2b7570e42 124
+# --------------------------------
+step 24.72
+screendump 20080101_000001_5965948293222a6d6f3e545db40c23c1.ppm
+# open start menu if not already open
+sleep 10
+barrier_2 84 48 0 552 082462ce890968a264b9b13cddda8ae3 10 optional
# Sending keys: ctrl-esc
key ctrl-esc
# --------------------------------
diff --git a/client/tests/kvm/steps/WinXP-32.steps b/client/tests/kvm/steps/WinXP-32.steps
index b0c6e35..f52fd0e 100644
--- a/client/tests/kvm/steps/WinXP-32.steps
+++ b/client/tests/kvm/steps/WinXP-32.steps
@@ -136,7 +136,8 @@ key alt-n
step 2251.56
screendump 20080101_000022_dcdc2fe9606c044ce648422afe42e23d.ppm
# User
-barrier_2 409 35 64 188 3d71d4d7a9364c1e6415b3d554ce6e5b 9
+barrier_2 161 37 312 187 a941ecbeb73f9d73e3e9c38da9a4b743 9
+# Sending keys: $user alt-n
var user
key alt-n
# --------------------------------
@@ -154,6 +155,7 @@ barrier_2 48 51 391 288 bbac8a522510d7c8d6e515f6a3fbd4c3 240
step 2279.61
screendump 20090416_150641_b72ad5c48ec2dbc9814d569e38cbb4cc.ppm
# Win XP Start Menu (closed)
+sleep 20
barrier_2 104 41 0 559 a7cc02cecff2cb495f300aefbb99d9ae 5 optional
# Sending keys: ctrl-esc
key ctrl-esc
diff --git a/client/tests/kvm/steps/WinXP-64.steps b/client/tests/kvm/steps/WinXP-64.steps
index 20bac81..91e6d0f 100644
--- a/client/tests/kvm/steps/WinXP-64.steps
+++ b/client/tests/kvm/steps/WinXP-64.steps
@@ -74,7 +74,6 @@ key ret
# --------------------------------
step 286.86
screendump 20080101_000010_bb878343930f948c0346f103a387157a.ppm
-barrier_2 69 15 179 8 93889bdbe5351e61a6d9c7d00bb1c971 10
# --------------------------------
step 409.46
screendump 20080101_000011_30db9777a7883a07e6e65bff74e1d98f.ppm
@@ -100,7 +99,7 @@ key 0xdc
step 978.02
screendump 20080101_000014_213fbe6fa13bf32dfac6a00bf4205e45.ppm
# Windows XP Start Menu Opened
-barrier_2 48 20 274 420 c4a9620d84508013050e5a37a0d9e4ef 15
+barrier_2 129 30 196 72 aae68af7e05e2312c707f2f4bd73f024 15
# Sending keys: u
key u
# --------------------------------
--
1.6.2.2
* [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 step file fixes
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 step file fixes Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 3/8] WinXP " Lucas Meneghel Rodrigues
1 sibling, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, Michael Goldish
Fix the initial boot barriers to include the 'boot:' prompt.
Also fix a barrier in 5.3 for a dialog that sometimes appears at an
unexpected location.
Signed-off-by: Michael Goldish <mgoldish@redhat.com>
---
client/tests/kvm/steps/RHEL-5.3-i386.steps | 12 ++++++++++--
client/tests/kvm/steps/RHEL-5.3-x86_64.steps | 2 +-
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/client/tests/kvm/steps/RHEL-5.3-i386.steps b/client/tests/kvm/steps/RHEL-5.3-i386.steps
index 86066d8..0964f47 100644
--- a/client/tests/kvm/steps/RHEL-5.3-i386.steps
+++ b/client/tests/kvm/steps/RHEL-5.3-i386.steps
@@ -7,7 +7,7 @@ step 10.33
screendump 20080101_000001_8333460cfad39ef04d6dbbf7d35fdcba.ppm
# boot options
sleep 10
-barrier_2 459 65 5 223 a817e977c049abaaab9a391d3cbeb1ab 52
+barrier_2 44 40 0 410 2f6c4cea4cf5b03bec757893e4982897 52
# Sending keys: ret
key ret
# --------------------------------
@@ -187,10 +187,18 @@ key alt-f
step 1518.52
screendump 20080101_000024_3e641443dd4ea558467e517a4be68517.ppm
# confirm no rhn
-barrier_2 97 18 146 77 a6a767b46d079c6879ebd5aec00cda46 43
+barrier_2 97 18 146 77 a6a767b46d079c6879ebd5aec00cda46 15 optional
# Sending keys: alt-n
key alt-n
# --------------------------------
+step 1518.52
+screendump 20090506_091708_e5436357421502fccb3df89a830fd2f4.ppm
+# confirm no rhn (2)
+barrier_2 60 37 59 15 d94ff141696970e06545ce9854306970 5 optional
+# Sending keys: alt-tab alt-n
+key alt-tab
+key alt-n
+# --------------------------------
step 1526.55
screendump 20080101_000025_e10ec4a12e28baa41af798cbdbf308a1.ppm
# finish update setup
diff --git a/client/tests/kvm/steps/RHEL-5.3-x86_64.steps b/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
index 96ba87e..fc0db20 100644
--- a/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
+++ b/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
@@ -7,7 +7,7 @@ step 9.22
screendump 20080101_000001_8333460cfad39ef04d6dbbf7d35fdcba.ppm
# boot options
sleep 10
-barrier_2 441 88 18 213 881883a4fa307a918f40804c70b9880d 46
+barrier_2 43 39 0 411 3c4a56d04a8ea1c0c5c1edfe10d472a0 46
# Sending keys: ret
key ret
# --------------------------------
--
1.6.2.2
* [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 " Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class Lucas Meneghel Rodrigues
2009-06-08 15:19 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 step file fixes Lucas Meneghel Rodrigues
1 sibling, 2 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, Michael Goldish
Disable a few keyboard shortcuts that were initially assumed to be useful,
but apparently override the default functionality of the involved keys
(e.g. 'home', 'end', 'delete') regardless of the widget that has the keyboard
focus.
Also make some indentation changes to the UI description.
Signed-off-by: Michael Goldish <mgoldish@redhat.com>
---
client/tests/kvm/stepeditor.py | 63 +++++++++++++++++++---------------------
1 files changed, 30 insertions(+), 33 deletions(-)
diff --git a/client/tests/kvm/stepeditor.py b/client/tests/kvm/stepeditor.py
index f2ef1aa..9669200 100755
--- a/client/tests/kvm/stepeditor.py
+++ b/client/tests/kvm/stepeditor.py
@@ -858,27 +858,27 @@ class StepMakerWindow:
class StepEditor(StepMakerWindow):
ui = '''<ui>
<menubar name="MenuBar">
- <menu action="File">
- <menuitem action="Open"/>
- <separator/>
- <menuitem action="Quit"/>
- </menu>
- <menu action="Edit">
- <menuitem action="CopyStep"/>
- <menuitem action="DeleteStep"/>
- </menu>
- <menu action="Insert">
- <menuitem action="InsertNewBefore"/>
- <menuitem action="InsertNewAfter"/>
- <separator/>
- <menuitem action="InsertStepsBefore"/>
- <menuitem action="InsertStepsAfter"/>
- </menu>
- <menu action="Tools">
- <menuitem action="CleanUp"/>
- </menu>
+ <menu action="File">
+ <menuitem action="Open"/>
+ <separator/>
+ <menuitem action="Quit"/>
+ </menu>
+ <menu action="Edit">
+ <menuitem action="CopyStep"/>
+ <menuitem action="DeleteStep"/>
+ </menu>
+ <menu action="Insert">
+ <menuitem action="InsertNewBefore"/>
+ <menuitem action="InsertNewAfter"/>
+ <separator/>
+ <menuitem action="InsertStepsBefore"/>
+ <menuitem action="InsertStepsAfter"/>
+ </menu>
+ <menu action="Tools">
+ <menuitem action="CleanUp"/>
+ </menu>
</menubar>
- </ui>'''
+</ui>'''
# Constructor
@@ -896,7 +896,7 @@ class StepEditor(StepMakerWindow):
self.window.add_accel_group(accelgroup)
# Create an ActionGroup
- actiongroup = gtk.ActionGroup('UIManagerExample')
+ actiongroup = gtk.ActionGroup('StepEditor')
# Create actions
actiongroup.add_actions([
@@ -904,22 +904,22 @@ class StepEditor(StepMakerWindow):
self.quit),
('Open', gtk.STOCK_OPEN, '_Open', None, 'Open steps file',
self.open_steps_file),
- ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', None,
+ ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', "",
'Copy current step to user specified position', self.copy_step),
- ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', None,
+ ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', "",
'Delete current step', self.event_remove_clicked),
- ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', None,
+ ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', "",
'Insert new step before current step', self.insert_before),
- ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', None,
+ ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', "",
'Insert new step after current step', self.insert_after),
('InsertStepsBefore', gtk.STOCK_ADD, '_Steps before current...',
- None, 'Insert steps (from file) before current step',
+ "", 'Insert steps (from file) before current step',
self.insert_steps_before),
- ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...',
- None, 'Insert steps (from file) after current step',
+ ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...', "",
+ 'Insert steps (from file) after current step',
self.insert_steps_after),
- ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory',
- None, 'Move unused PPM files to a backup directory', self.cleanup),
+ ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory', "",
+ 'Move unused PPM files to a backup directory', self.cleanup),
('File', None, '_File'),
('Edit', None, '_Edit'),
('Insert', None, '_Insert'),
@@ -939,9 +939,6 @@ class StepEditor(StepMakerWindow):
create_shortcut("Next", self.event_next_clicked, "Page_Down")
create_shortcut("Previous", self.event_prev_clicked, "Page_Up")
- create_shortcut("First", self.event_first_clicked, "Home")
- create_shortcut("Last", self.event_last_clicked, "End")
- create_shortcut("Delete", self.event_remove_clicked, "Delete")
# Add the actiongroup to the uimanager
uimanager.insert_action_group(actiongroup, 0)
--
1.6.2.2
* [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
@ 2009-06-08 4:01 ` Lucas Meneghel Rodrigues
2009-06-08 15:19 ` Lucas Meneghel Rodrigues
2009-06-08 15:19 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
1 sibling, 1 reply; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 4:01 UTC (permalink / raw)
To: kvm; +Cc: Lucas Meneghel Rodrigues, Michael Goldish
Choose a monitor filename in the VM class constructor instead of the
VM.create() method. This will reduce the number of monitor files left
in /tmp, because the constructor is called fewer times than VM.create().
Risk: Low (comprehensible change, just moving a block of code).
Visibility: Small (users of kvm test).
Signed-off-by: Michael Goldish <mgoldish@redhat.com>
---
client/tests/kvm/kvm_vm.py | 21 +++++++++++----------
1 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
index 76b0251..3001648 100644
--- a/client/tests/kvm/kvm_vm.py
+++ b/client/tests/kvm/kvm_vm.py
@@ -115,6 +115,17 @@ class VM:
self.iso_dir = iso_dir
+ # Find available monitor filename
+ while True:
+ # The monitor filename should be unique
+ self.instance = time.strftime("%Y%m%d-%H%M%S-") + \
+ kvm_utils.generate_random_string(4)
+ self.monitor_file_name = os.path.join("/tmp",
+ "monitor-" + self.instance)
+ if not os.path.exists(self.monitor_file_name):
+ break
+
+
def verify_process_identity(self):
"""
Make sure .pid really points to the original qemu process. If .pid
@@ -297,16 +308,6 @@ class VM:
logging.error("Actual MD5 sum differs from expected one")
return False
- # Find available monitor filename
- while True:
- # The monitor filename should be unique
- self.instance = time.strftime("%Y%m%d-%H%M%S-") + \
- kvm_utils.generate_random_string(4)
- self.monitor_file_name = os.path.join("/tmp",
- "monitor-" + self.instance)
- if not os.path.exists(self.monitor_file_name):
- break
-
# Handle port redirections
redir_names = kvm_utils.get_sub_dict_names(params, "redirs")
host_ports = kvm_utils.find_free_ports(5000, 6000, len(redir_names))
--
1.6.2.2
* Re: [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 1/3] Make possible to use kvm_config as a standalone program Lucas Meneghel Rodrigues
@ 2009-06-08 15:16 ` Lucas Meneghel Rodrigues
1 sibling, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:16 UTC (permalink / raw)
To: kvm; +Cc: David Huff
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> This fix modifies kvm_config.split_and_strip so it will only split once per
> line.
Applied.
> example: kernel_args = "ks=floppy console=ttyS0 noacpi"
>
> Signed-off-by: David Huff <dhuff@redhat.com>
> ---
> client/tests/kvm/kvm_config.py | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/client/tests/kvm/kvm_config.py b/client/tests/kvm/kvm_config.py
> index 8b6ab15..40f16f1 100755
> --- a/client/tests/kvm/kvm_config.py
> +++ b/client/tests/kvm/kvm_config.py
> @@ -136,7 +136,7 @@ class config:
> @param str: String that will be processed
> @param sep: Separator that will be used to split the string
> """
> - temp = str.split(sep)
> + temp = str.split(sep, 1)
> for i in range(len(temp)):
> temp[i] = temp[i].strip()
> temp[i] = temp[i].strip("\"\'")
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* Re: [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 3/3] Fix bad logging calls Lucas Meneghel Rodrigues
@ 2009-06-08 15:17 ` Lucas Meneghel Rodrigues
1 sibling, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:17 UTC (permalink / raw)
To: kvm, Autotest mailing list; +Cc: Michael Goldish
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> Fix the first barrier to include the 'boot:' prompt. This is crucial
> because the guest accepts keyboard input only after this prompt
> appears.
Applied.
> Visibility: Small (Data file on the kvm test)
> Risk: Low (Small, tested change to a data file)
>
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> ---
> client/tests/kvm/steps/RHEL-4.7-i386.steps | 2 +-
> client/tests/kvm/steps/RHEL-4.7-x86_64.steps | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/client/tests/kvm/steps/RHEL-4.7-i386.steps b/client/tests/kvm/steps/RHEL-4.7-i386.steps
> index 3816c15..763d473 100644
> --- a/client/tests/kvm/steps/RHEL-4.7-i386.steps
> +++ b/client/tests/kvm/steps/RHEL-4.7-i386.steps
> @@ -4,7 +4,7 @@
> # --------------------------------
> step unknown
> screendump 20090413_013526_868fe81019ae64a0b066c4c0d4ebc4e1.ppm
> -barrier_2 188 30 354 265 48ef114b5a42ba0d5bebfaee47dce498 50
> +barrier_2 44 38 0 363 d9ca61811a10b33cc95515d4796541e7 50
> # Sending keys: ret
> key ret
> # --------------------------------
> diff --git a/client/tests/kvm/steps/RHEL-4.7-x86_64.steps b/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
> index 644446f..36f0109 100644
> --- a/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
> +++ b/client/tests/kvm/steps/RHEL-4.7-x86_64.steps
> @@ -6,7 +6,7 @@ step 8.84
> screendump 20080101_000001_868fe81019ae64a0b066c4c0d4ebc4e1.ppm
> # boot options
> sleep 5
> -barrier_2 194 59 101 59 8c4f6b29e4087e1bed13c40e4a6a904f 44 optional
> +barrier_2 44 36 0 365 ea4c08daabe1f982b243fce9c5b542a0 44 optional
> # Sending keys: ret
> key ret
> # --------------------------------
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* Re: [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 " Lucas Meneghel Rodrigues
@ 2009-06-08 15:18 ` Lucas Meneghel Rodrigues
1 sibling, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:18 UTC (permalink / raw)
To: kvm, Autotest mailing list; +Cc: Michael Goldish
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> Add an optional barrier to deal with a closed start menu.
Applied.
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> ---
> client/tests/kvm/steps/WinXP-32-setupssh.steps | 10 ++++++++--
> client/tests/kvm/steps/WinXP-32.steps | 4 +++-
> client/tests/kvm/steps/WinXP-64.steps | 3 +--
> 3 files changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/client/tests/kvm/steps/WinXP-32-setupssh.steps b/client/tests/kvm/steps/WinXP-32-setupssh.steps
> index 729d9df..ebb665f 100644
> --- a/client/tests/kvm/steps/WinXP-32-setupssh.steps
> +++ b/client/tests/kvm/steps/WinXP-32-setupssh.steps
> @@ -4,8 +4,14 @@
> # --------------------------------
> step 24.72
> screendump 20080101_000001_5965948293222a6d6f3e545db40c23c1.ppm
> -# open start menu
> -barrier_2 125 79 342 270 368b3d82c870dbcdc4dfc2a49660e798 124
> +# desktop reached
> +barrier_2 36 32 392 292 3828d3a9587b3a9766a567a2b7570e42 124
> +# --------------------------------
> +step 24.72
> +screendump 20080101_000001_5965948293222a6d6f3e545db40c23c1.ppm
> +# open start menu if not already open
> +sleep 10
> +barrier_2 84 48 0 552 082462ce890968a264b9b13cddda8ae3 10 optional
> # Sending keys: ctrl-esc
> key ctrl-esc
> # --------------------------------
> diff --git a/client/tests/kvm/steps/WinXP-32.steps b/client/tests/kvm/steps/WinXP-32.steps
> index b0c6e35..f52fd0e 100644
> --- a/client/tests/kvm/steps/WinXP-32.steps
> +++ b/client/tests/kvm/steps/WinXP-32.steps
> @@ -136,7 +136,8 @@ key alt-n
> step 2251.56
> screendump 20080101_000022_dcdc2fe9606c044ce648422afe42e23d.ppm
> # User
> -barrier_2 409 35 64 188 3d71d4d7a9364c1e6415b3d554ce6e5b 9
> +barrier_2 161 37 312 187 a941ecbeb73f9d73e3e9c38da9a4b743 9
> +# Sending keys: $user alt-n
> var user
> key alt-n
> # --------------------------------
> @@ -154,6 +155,7 @@ barrier_2 48 51 391 288 bbac8a522510d7c8d6e515f6a3fbd4c3 240
> step 2279.61
> screendump 20090416_150641_b72ad5c48ec2dbc9814d569e38cbb4cc.ppm
> # Win XP Start Menu (closed)
> +sleep 20
> barrier_2 104 41 0 559 a7cc02cecff2cb495f300aefbb99d9ae 5 optional
> # Sending keys: ctrl-esc
> key ctrl-esc
> diff --git a/client/tests/kvm/steps/WinXP-64.steps b/client/tests/kvm/steps/WinXP-64.steps
> index 20bac81..91e6d0f 100644
> --- a/client/tests/kvm/steps/WinXP-64.steps
> +++ b/client/tests/kvm/steps/WinXP-64.steps
> @@ -74,7 +74,6 @@ key ret
> # --------------------------------
> step 286.86
> screendump 20080101_000010_bb878343930f948c0346f103a387157a.ppm
> -barrier_2 69 15 179 8 93889bdbe5351e61a6d9c7d00bb1c971 10
> # --------------------------------
> step 409.46
> screendump 20080101_000011_30db9777a7883a07e6e65bff74e1d98f.ppm
> @@ -100,7 +99,7 @@ key 0xdc
> step 978.02
> screendump 20080101_000014_213fbe6fa13bf32dfac6a00bf4205e45.ppm
> # Windows XP Start Menu Opened
> -barrier_2 48 20 274 420 c4a9620d84508013050e5a37a0d9e4ef 15
> +barrier_2 129 30 196 72 aae68af7e05e2312c707f2f4bd73f024 15
> # Sending keys: u
> key u
> # --------------------------------
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* Re: [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 step file fixes
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 " Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
@ 2009-06-08 15:18 ` Lucas Meneghel Rodrigues
1 sibling, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:18 UTC (permalink / raw)
To: kvm; +Cc: Michael Goldish
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> Fix the initial boot barriers to include the 'boot:' prompt.
> Also fix a barrier in 5.3 for a dialog that sometimes appears at an
> unexpected location.
Applied.
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> ---
> client/tests/kvm/steps/RHEL-5.3-i386.steps | 12 ++++++++++--
> client/tests/kvm/steps/RHEL-5.3-x86_64.steps | 2 +-
> 2 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/client/tests/kvm/steps/RHEL-5.3-i386.steps b/client/tests/kvm/steps/RHEL-5.3-i386.steps
> index 86066d8..0964f47 100644
> --- a/client/tests/kvm/steps/RHEL-5.3-i386.steps
> +++ b/client/tests/kvm/steps/RHEL-5.3-i386.steps
> @@ -7,7 +7,7 @@ step 10.33
> screendump 20080101_000001_8333460cfad39ef04d6dbbf7d35fdcba.ppm
> # boot options
> sleep 10
> -barrier_2 459 65 5 223 a817e977c049abaaab9a391d3cbeb1ab 52
> +barrier_2 44 40 0 410 2f6c4cea4cf5b03bec757893e4982897 52
> # Sending keys: ret
> key ret
> # --------------------------------
> @@ -187,10 +187,18 @@ key alt-f
> step 1518.52
> screendump 20080101_000024_3e641443dd4ea558467e517a4be68517.ppm
> # confirm no rhn
> -barrier_2 97 18 146 77 a6a767b46d079c6879ebd5aec00cda46 43
> +barrier_2 97 18 146 77 a6a767b46d079c6879ebd5aec00cda46 15 optional
> # Sending keys: alt-n
> key alt-n
> # --------------------------------
> +step 1518.52
> +screendump 20090506_091708_e5436357421502fccb3df89a830fd2f4.ppm
> +# confirm no rhn (2)
> +barrier_2 60 37 59 15 d94ff141696970e06545ce9854306970 5 optional
> +# Sending keys: alt-tab alt-n
> +key alt-tab
> +key alt-n
> +# --------------------------------
> step 1526.55
> screendump 20080101_000025_e10ec4a12e28baa41af798cbdbf308a1.ppm
> # finish update setup
> diff --git a/client/tests/kvm/steps/RHEL-5.3-x86_64.steps b/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
> index 96ba87e..fc0db20 100644
> --- a/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
> +++ b/client/tests/kvm/steps/RHEL-5.3-x86_64.steps
> @@ -7,7 +7,7 @@ step 9.22
> screendump 20080101_000001_8333460cfad39ef04d6dbbf7d35fdcba.ppm
> # boot options
> sleep 10
> -barrier_2 441 88 18 213 881883a4fa307a918f40804c70b9880d 46
> +barrier_2 43 39 0 411 3c4a56d04a8ea1c0c5c1edfe10d472a0 46
> # Sending keys: ret
> key ret
> # --------------------------------
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* Re: [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class Lucas Meneghel Rodrigues
@ 2009-06-08 15:19 ` Lucas Meneghel Rodrigues
1 sibling, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:19 UTC (permalink / raw)
To: kvm, Autotest mailing list; +Cc: Michael Goldish
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> Disable a few keyboard shortcuts that were initially assumed to be useful,
> but apparently override the default functionality of the involved keys
> (e.g. 'home', 'end', 'delete') regardless of the widget that has the keyboard
> focus.
>
> Also make some indentation changes to the UI description.
>
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> ---
> client/tests/kvm/stepeditor.py | 63 +++++++++++++++++++---------------------
> 1 files changed, 30 insertions(+), 33 deletions(-)
>
> diff --git a/client/tests/kvm/stepeditor.py b/client/tests/kvm/stepeditor.py
> index f2ef1aa..9669200 100755
> --- a/client/tests/kvm/stepeditor.py
> +++ b/client/tests/kvm/stepeditor.py
> @@ -858,27 +858,27 @@ class StepMakerWindow:
> class StepEditor(StepMakerWindow):
> ui = '''<ui>
> <menubar name="MenuBar">
> - <menu action="File">
> - <menuitem action="Open"/>
> - <separator/>
> - <menuitem action="Quit"/>
> - </menu>
> - <menu action="Edit">
> - <menuitem action="CopyStep"/>
> - <menuitem action="DeleteStep"/>
> - </menu>
> - <menu action="Insert">
> - <menuitem action="InsertNewBefore"/>
> - <menuitem action="InsertNewAfter"/>
> - <separator/>
> - <menuitem action="InsertStepsBefore"/>
> - <menuitem action="InsertStepsAfter"/>
> - </menu>
> - <menu action="Tools">
> - <menuitem action="CleanUp"/>
> - </menu>
> + <menu action="File">
> + <menuitem action="Open"/>
> + <separator/>
> + <menuitem action="Quit"/>
> + </menu>
> + <menu action="Edit">
> + <menuitem action="CopyStep"/>
> + <menuitem action="DeleteStep"/>
> + </menu>
> + <menu action="Insert">
> + <menuitem action="InsertNewBefore"/>
> + <menuitem action="InsertNewAfter"/>
> + <separator/>
> + <menuitem action="InsertStepsBefore"/>
> + <menuitem action="InsertStepsAfter"/>
> + </menu>
> + <menu action="Tools">
> + <menuitem action="CleanUp"/>
> + </menu>
> </menubar>
> - </ui>'''
> +</ui>'''
>
> # Constructor
>
> @@ -896,7 +896,7 @@ class StepEditor(StepMakerWindow):
> self.window.add_accel_group(accelgroup)
>
> # Create an ActionGroup
> - actiongroup = gtk.ActionGroup('UIManagerExample')
> + actiongroup = gtk.ActionGroup('StepEditor')
>
> # Create actions
> actiongroup.add_actions([
> @@ -904,22 +904,22 @@ class StepEditor(StepMakerWindow):
> self.quit),
> ('Open', gtk.STOCK_OPEN, '_Open', None, 'Open steps file',
> self.open_steps_file),
> - ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', None,
> + ('CopyStep', gtk.STOCK_COPY, '_Copy current step...', "",
> 'Copy current step to user specified position', self.copy_step),
> - ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', None,
> + ('DeleteStep', gtk.STOCK_DELETE, '_Delete current step', "",
> 'Delete current step', self.event_remove_clicked),
> - ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', None,
> + ('InsertNewBefore', gtk.STOCK_ADD, '_New step before current', "",
> 'Insert new step before current step', self.insert_before),
> - ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', None,
> + ('InsertNewAfter', gtk.STOCK_ADD, 'N_ew step after current', "",
> 'Insert new step after current step', self.insert_after),
> ('InsertStepsBefore', gtk.STOCK_ADD, '_Steps before current...',
> - None, 'Insert steps (from file) before current step',
> + "", 'Insert steps (from file) before current step',
> self.insert_steps_before),
> - ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...',
> - None, 'Insert steps (from file) after current step',
> + ('InsertStepsAfter', gtk.STOCK_ADD, 'Steps _after current...', "",
> + 'Insert steps (from file) after current step',
> self.insert_steps_after),
> - ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory',
> - None, 'Move unused PPM files to a backup directory', self.cleanup),
> + ('CleanUp', gtk.STOCK_DELETE, '_Clean up data directory', "",
> + 'Move unused PPM files to a backup directory', self.cleanup),
> ('File', None, '_File'),
> ('Edit', None, '_Edit'),
> ('Insert', None, '_Insert'),
> @@ -939,9 +939,6 @@ class StepEditor(StepMakerWindow):
>
> create_shortcut("Next", self.event_next_clicked, "Page_Down")
> create_shortcut("Previous", self.event_prev_clicked, "Page_Up")
> - create_shortcut("First", self.event_first_clicked, "Home")
> - create_shortcut("Last", self.event_last_clicked, "End")
> - create_shortcut("Delete", self.event_remove_clicked, "Delete")
>
> # Add the actiongroup to the uimanager
> uimanager.insert_action_group(actiongroup, 0)
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* Re: [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class Lucas Meneghel Rodrigues
@ 2009-06-08 15:19 ` Lucas Meneghel Rodrigues
0 siblings, 0 replies; 29+ messages in thread
From: Lucas Meneghel Rodrigues @ 2009-06-08 15:19 UTC (permalink / raw)
To: kvm; +Cc: Michael Goldish
On Mon, 2009-06-08 at 01:01 -0300, Lucas Meneghel Rodrigues wrote:
> Choose a monitor filename in the VM class constructor instead of the
> VM.create() method. This will reduce the number of monitor files left
> in /tmp, because the constructor is called fewer times than VM.create().
Applied.
> Risk: Low (comprehensible change, just moving a block of code).
> Visibility: Small (users of kvm test).
>
> Signed-off-by: Michael Goldish <mgoldish@redhat.com>
> ---
> client/tests/kvm/kvm_vm.py | 21 +++++++++++----------
> 1 files changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/client/tests/kvm/kvm_vm.py b/client/tests/kvm/kvm_vm.py
> index 76b0251..3001648 100644
> --- a/client/tests/kvm/kvm_vm.py
> +++ b/client/tests/kvm/kvm_vm.py
> @@ -115,6 +115,17 @@ class VM:
> self.iso_dir = iso_dir
>
>
> + # Find available monitor filename
> + while True:
> + # The monitor filename should be unique
> + self.instance = time.strftime("%Y%m%d-%H%M%S-") + \
> + kvm_utils.generate_random_string(4)
> + self.monitor_file_name = os.path.join("/tmp",
> + "monitor-" + self.instance)
> + if not os.path.exists(self.monitor_file_name):
> + break
> +
> +
> def verify_process_identity(self):
> """
> Make sure .pid really points to the original qemu process. If .pid
> @@ -297,16 +308,6 @@ class VM:
> logging.error("Actual MD5 sum differs from expected one")
> return False
>
> - # Find available monitor filename
> - while True:
> - # The monitor filename should be unique
> - self.instance = time.strftime("%Y%m%d-%H%M%S-") + \
> - kvm_utils.generate_random_string(4)
> - self.monitor_file_name = os.path.join("/tmp",
> - "monitor-" + self.instance)
> - if not os.path.exists(self.monitor_file_name):
> - break
> -
> # Handle port redirections
> redir_names = kvm_utils.get_sub_dict_names(params, "redirs")
> host_ports = kvm_utils.find_free_ports(5000, 6000, len(redir_names))
--
Lucas Meneghel Rodrigues
Software Engineer (QE)
Red Hat - Emerging Technologies
* [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-08 4:01 [KVM-AUTOTEST PATCH 0/8] Re-submitting some of the patches on the patch queue Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
@ 2009-06-09 8:41 ` Yolkfull Chow
2009-06-09 9:37 ` Yaniv Kaul
2009-06-09 12:45 ` Uri Lublin
1 sibling, 2 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-09 8:41 UTC (permalink / raw)
To: kvm; +Cc: Uri Lublin
[-- Attachment #1: Type: text/plain, Size: 156 bytes --]
Hi,
This test will boot VMs until one of them becomes unresponsive, and
records the maximum number of VMs successfully started.
--
Yolkfull
Regards,
[-- Attachment #2: kvm_tests.py.patch --]
[-- Type: text/plain, Size: 2892 bytes --]
diff --git a/client/tests/kvm/kvm_tests.py b/client/tests/kvm/kvm_tests.py
index cccc48e..7d00277 100644
--- a/client/tests/kvm/kvm_tests.py
+++ b/client/tests/kvm/kvm_tests.py
@@ -466,3 +466,70 @@ def run_linux_s3(test, params, env):
logging.info("VM resumed after S3")
session.close()
+
+def run_boot_vms(tests, params, env):
+ """
+ Boots VMs until one of them becomes unresponsive, and records the maximum
+ number of VMs successfully started:
+ 1) boot the first vm
+ 2) boot the second vm cloned from the first vm, check whether it boots up
+ and all booted vms can ssh-login
+ 3) go on until cannot create VM anymore or cannot allocate memory for VM
+
+ @param test: kvm test object
+ @param params: Dictionary with the test parameters
+ @param env: Dictionary with test environment.
+ """
+ # boot the first vm
+ vm1 = kvm_utils.env_get_vm(env, params.get("main_vm"))
+
+ if not vm1:
+ raise error.TestError("VM object not found in environment")
+ if not vm1.is_alive():
+ raise error.TestError("VM seems to be dead; Test requires a living VM")
+
+ logging.info("Waiting for first guest to be up...")
+
+ vm1_session = kvm_utils.wait_for(vm1.ssh_login, 240, 0, 2)
+ if not vm1_session:
+ raise error.TestFail("Could not log into first guest")
+
+ num = 1
+ vms = [vm1]
+ sessions = [vm1_session]
+
+ # boot the VMs
+ while True:
+ try:
+ num += 1
+ vm_name = "vm" + str(num)
+
+ # clone vm according to the first one
+ curr_vm = vm1.clone(vm_name)
+ logging.info(" Booting the %dth guest" % num)
+ if not curr_vm.create():
+ raise error.TestFail("Cannot boot vm anylonger")
+
+ curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login, 240, 0, 2)
+
+ if not curr_vm_session:
+ curr_vm.send_monitor_cmd("quit")
+ raise error.TestFail("Could not log into %dth guest" % num)
+
+ logging.info(" %dth guest boots up successfully" % num)
+ sessions.append(curr_vm_session)
+ vms.append(curr_vm)
+
+ # check whether all previous ssh sessions are responsive
+ for vm_session in sessions:
+ if not vm_session.is_responsive():
+ logging.error("%dth guest's session is not responsive" \
+ % (sessions.index(vm_session) + 1))
+
+ except (error.TestFail, OSError):
+ for vm in vms:
+ logging.info("Shut down the %dth guest" % (vms.index(vm) + 1))
+ vm.destroy(gracefully = params.get("kill_vm_gracefully") \
+ == "yes")
+ logging.info("Total number booted successfully: %d" % (num - 1))
+ break
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-09 8:41 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Yolkfull Chow
@ 2009-06-09 9:37 ` Yaniv Kaul
2009-06-09 9:57 ` Michael Goldish
2009-06-09 12:45 ` Uri Lublin
1 sibling, 1 reply; 29+ messages in thread
From: Yaniv Kaul @ 2009-06-09 9:37 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: kvm, Uri Lublin
>
> Hi,
>
> This test will boot VMs until one of them becomes unresponsive, and
> records the maximum number of VMs successfully started.
>
>
Can you clarify what this test is exactly testing? Is it any of the
tests on http://kvm.et.redhat.com/page/KVM-Autotest/TODO (if not, please
add it).
Are you expecting OOM? Or some VMs to go into swap? Are the VMs
completely idle, except for responding to SSH?
Are you going to integrate KSM into this?
TIA,
Y.
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
[not found] <2021156332.1536421244540393444.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-06-09 9:44 ` Michael Goldish
2009-06-10 8:10 ` Yolkfull Chow
0 siblings, 1 reply; 29+ messages in thread
From: Michael Goldish @ 2009-06-09 9:44 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: Uri Lublin, kvm
The test looks pretty nicely written. Comments:
1. Consider making all the cloned VMs use image snapshots:
curr_vm = vm1.clone()
curr_vm.get_params()["extra_params"] += " -snapshot"
I'm not sure it's a good idea to let all VMs use the same disk image.
Or maybe you shouldn't add -snapshot yourself, but rather do it in the config
file for the first VM, and then all cloned VMs will have -snapshot as well.
2. Consider changing the message
" Booting the %dth guest" % num
to
"Booting guest #%d" % num
(because there's no such thing as 2th and 3th)
3. Consider changing the message
"Cannot boot vm anylonger"
to
"Cannot create VM #%d" % num
4. Why not add curr_vm to vms immediately after cloning it?
That way you can kill it in the exception handler later, without having
to send it a 'quit' if you can't login ('if not curr_vm_session').
5. " %dth guest boots up successfully" % num --> again, 2th and 3th make no sense.
Also, I wonder why you add those spaces before every info message.
6. "%dth guest's session is not responsive" --> same
(maybe use "Guest session #%d is not responsive" % num)
7. "Shut down the %dth guest" --> same
(maybe "Shutting down guest #%d"? or destroying/killing?)
8. Shouldn't we fail the test when we find an unresponsive session?
It seems you just display an error message. You can simply replace
logging.error( with raise error.TestFail(.
9. Consider using a stricter test than just vm_session.is_responsive().
vm_session.is_responsive() just sends ENTER to the sessions and returns
True if it gets anything as a result (usually a prompt, or even just a
newline echoed back). If the session passes this test it is indeed
responsive, so it's a decent test, but maybe you can send some command
(user configurable?) and test for some output. I'm really not sure this
is important, because I can't imagine a session would respond to a newline
but not to other commands, but who knows. Maybe you can send the first VM
a user-specified command when the test begins, remember the output, and
then send all other VMs the same command and make sure the output is the
same. (A rough sketch of this idea follows after comment 14.)
10. I'm not sure you should use the param "kill_vm_gracefully" because that's
a postprocessor param (probably not your business). You can just call
destroy() in the exception handler with gracefully=False, because if the VMs
are non-responsive, I don't expect them to shutdown nicely with an SSH
command (that's what gracefully does). Also, we're using -snapshot, so
there's no reason to shut them down nicely.
11. "Total number booted successfully: %d" % (num - 1) --> why not just num?
We really have num VMs including the first one.
Or you can say: "Total number booted successfully in addition to the first one"
but that's much longer.
12. Consider adding a 'max_vms' (or 'threshold') user param to the test. If
num reaches 'max_vms', we stop adding VMs and pass the test. Otherwise the
test will always fail (which is depressing). If params.get("threshold") is
None or "", or in short -- 'if not params.get("threshold")', disable this
feature and keep adding VMs forever. The user can enable the feature with:
max_vms = 50
or disable it with:
max_vms =
13. Why are you catching OSError? If you get OSError it might be a framework bug.
14. At the end of the exception handler you should probably re-raise the exception
you caught. Otherwise the user won't see the error message. You can simply replace
'break' with 'raise' (no parameters), and it should work, hopefully.
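As a rough sketch of comment 9 (the get_command_output() helper and its
signature are assumptions for illustration, not the exact framework API):

    import logging

    def sessions_respond_identically(sessions, command="uname -r", timeout=30):
        """Send the same command to every guest session and compare outputs."""
        reference = None
        for index, session in enumerate(sessions):
            # get_command_output() is an assumed helper on the session object
            output = session.get_command_output(command, timeout=timeout)
            if reference is None:
                reference = output      # remember the first VM's output
            elif output != reference:
                logging.error("Guest session #%d gave unexpected output",
                              index + 1)
                return False
        return True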
I know these are quite a few comments, but they're all rather minor and the test
is well written in my opinion.
Thanks,
Michael
----- Original Message -----
From: "Yolkfull Chow" <yzhou@redhat.com>
To: kvm@vger.kernel.org
Cc: "Uri Lublin" <uril@redhat.com>
Sent: Tuesday, June 9, 2009 11:41:54 AM (GMT+0200) Auto-Detected
Subject: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
Hi,
This test will boot VMs until one of them becomes unresponsive, and
records the maximum number of VMs successfully started.
--
Yolkfull
Regards,
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-09 9:37 ` Yaniv Kaul
@ 2009-06-09 9:57 ` Michael Goldish
0 siblings, 0 replies; 29+ messages in thread
From: Michael Goldish @ 2009-06-09 9:57 UTC (permalink / raw)
To: Yaniv Kaul; +Cc: kvm, Uri Lublin, Yolkfull Chow
----- "Yaniv Kaul" <ykaul@redhat.com> wrote:
> >
> > Hi,
> >
> > This test will boot VMs until one of them becomes unresponsive, and
>
> > records the maximum number of VMs successfully started.
> >
> >
> Can you clarify what this test is exactly testing? Is it any of the
> tests on http://kvm.et.redhat.com/page/KVM-Autotest/TODO (if not,
> please add it).
The test is in the wiki -- I added it months ago but didn't write it:
'Write a test which adds VMs until one of them becomes unresponsive, and records the maximum number of VMs successfully started. [jasowang]'
> Are you expecting OOM? Or some VMs to go into swap? Are the VMs
> completely idle, except for responding to SSH?
> Are you going to integrate KSM into this?
In my review of the patch I forgot to mention running load on the VMs.
This can be done easily by using 2 sessions per guest (or running in the background of a single session, but I prefer the former), and should be made user configurable via the config file.
I'm not sure about the other things you mentioned -- what should we do about OOM and swap usage? Fail the test? Limit the number of VMs?
And KSM sounds like a good idea, but I'm not sure it should be set up by the framework. Maybe it should be pre-setup on some of the hosts, so eventually some hosts will test with KSM and some without, and the framework can be unaware of that. We can find a way to add that information to the results database (like we currently add the KVM version).
Another option is to write a KSM setup test, like kvm_install, that will either run or not run before all other tests, depending on the control file.
>
>
> TIA,
> Y.
>
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-09 8:41 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Yolkfull Chow
2009-06-09 9:37 ` Yaniv Kaul
@ 2009-06-09 12:45 ` Uri Lublin
2009-06-10 8:12 ` Yolkfull Chow
1 sibling, 1 reply; 29+ messages in thread
From: Uri Lublin @ 2009-06-09 12:45 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: kvm
On 06/09/2009 11:41 AM, Yolkfull Chow wrote:
>
> Hi,
>
> This test will boot VMs until one of them becomes unresponsive, and
> records the maximum number of VMs successfully started.
>
>
Hello,
Some more comments (in addition to previous comments by others)
1. Do not just send monitor command "quit" but use vm.destroy
* This was mentioned by Michael, but in a different context.
2. Do not destroy main_vm (or vm1). We may want to run other tests on it.
3. You can use enumerate(vms) instead of looking up each vm's index (see the small example below).
4. It would be nice to close all ssh sessions too.
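For comment 3, a small example of the suggested form (vms stands for the
list of VM objects the test keeps; destroy(gracefully=False) follows
Michael's comment 10):

    import logging

    def destroy_all_guests(vms):
        # enumerate() yields the index directly; no vms.index(vm) lookup needed
        for i, vm in enumerate(vms):
            logging.info("Destroying guest #%d", i + 1)
            vm.destroy(gracefully=False)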
Regards,
Uri.
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-09 9:44 ` Michael Goldish
@ 2009-06-10 8:10 ` Yolkfull Chow
0 siblings, 0 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-10 8:10 UTC (permalink / raw)
To: Michael Goldish; +Cc: Uri Lublin, kvm
On 06/09/2009 05:44 PM, Michael Goldish wrote:
> The test looks pretty nicely written. Comments:
>
> 1. Consider making all the cloned VMs use image snapshots:
>
> curr_vm = vm1.clone()
> curr_vm.get_params()["extra_params"] += " -snapshot"
>
> I'm not sure it's a good idea to let all VMs use the same disk image.
> Or maybe you shouldn't add -snapshot yourself, but rather do it in the config
> file for the first VM, and then all cloned VMs will have -snapshot as well.
>
Yes, I use 'image_snapshot = yes' in the config file.
> 2. Consider changing the message
> " Booting the %dth guest" % num
> to
> "Booting guest #%d" % num
> (because there's no such thing as 2th and 3th)
>
> 3. Consider changing the message
> "Cannot boot vm anylonger"
> to
> "Cannot create VM #%d" % num
>
> 4. Why not add curr_vm to vms immediately after cloning it?
> That way you can kill it in the exception handler later, without having
> to send it a 'quit' if you can't login ('if not curr_vm_session').
>
Yes, good idea.
> 5. " %dth guest boots up successfully" % num --> again, 2th and 3th make no sense.
> Also, I wonder why you add those spaces before every info message.
>
> 6. "%dth guest's session is not responsive" --> same
> (maybe use "Guest session #%d is not responsive" % num)
>
> 7. "Shut down the %dth guest" --> same
> (maybe "Shutting down guest #%d"? or destroying/killing?)
>
> 8. Shouldn't we fail the test when we find an unresponsive session?
> It seems you just display an error message. You can simply replace
> logging.error( with raise error.TestFail(.
>
> 9. Consider using a stricter test than just vm_session.is_responsive().
> vm_session.is_responsive() just sends ENTER to the sessions and returns
> True if it gets anything as a result (usually a prompt, or even just a
> newline echoed back). If the session passes this test it is indeed
> responsive, so it's a decent test, but maybe you can send some command
> (user configurable?) and test for some output. I'm really not sure this
> is important, because I can't imagine a session would respond to a newline
> but not to other commands, but who knows. Maybe you can send the first VM
> a user-specified command when the test begins, remember the output, and
> then send all other VMs the same command and make sure the output is the
> same.
>
Maybe use 'info status', or send the command 'help' via the session to the VMs, and
compare their output?
> 10. I'm not sure you should use the param "kill_vm_gracefully" because that's
> a postprocessor param (probably not your business). You can just call
> destroy() in the exception handler with gracefully=False, because if the VMs
> are non- responsive, I don't expect them to shutdown nicely with an SSH
> command (that's what gracefully does). Also, we're using -snapshot, so
> there's no reason to shut them down nicely.
>
Yes, I agree. :)
> 11. "Total number booted successfully: %d" % (num - 1) --> why not just num?
> We really have num VMs including the first one.
> Or you can say: "Total number booted successfully in addition to the first one"
> but that's much longer.
>
After the first guest has booted I set num = 1, and then do 'num += 1' at the start
of the while loop (in order to get a new VM). So curr_vm is vm2 (num is 2) at that
point. If the second VM fails to boot, the number booted successfully should be
(num - 1). I will use enumerate(vms), as Uri suggested, to make the counting easier.
> 12. Consider adding a 'max_vms' (or 'threshold') user param to the test. If
> num reaches 'max_vms', we stop adding VMs and pass the test. Otherwise the
> test will always fail (which is depressing). If params.get("threshold") is
> None or "", or in short -- 'if not params.get("threshold")', disable this
> feature and keep adding VMs forever. The user can enable the feature with:
> max_vms = 50
> or disable it with:
> max_vms =
>
This is a good idea given the hardware resource limits of the host.
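Something along these lines, perhaps (a rough sketch of the check Michael described; the parameter name 'max_vms' is just whatever we end up choosing):

    import logging

    def reached_limit(params, num):
        # An empty or missing 'max_vms' disables the feature, as suggested.
        max_vms = params.get("max_vms")
        if max_vms and num >= int(max_vms):
            logging.info("Reached the configured limit of %s VMs" % max_vms)
            return True
        return False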
> 13. Why are you catching OSError? If you get OSError it might be a framework bug.
>
Because sometimes vm.create() succeeds but the ssh login fails, since the running
python process cannot allocate physical memory (OSError).
Adding max_vms could fix this problem, I think.
> 14. At the end of the exception handler you should probably re-raise the exception
> you caught. Otherwise the user won't see the error message. You can simply replace
> 'break' with 'raise' (no parameters), and it should work, hopefully.
>
Yes, I should if I add a 'max_vms'.
> I know these are quite a few comments, but they're all rather minor and the test
> is well written in my opinion.
>
Thank you, I will modify the test according to your and Uri's comments,
and will re-submit it here later. :)
Thanks and Best Regards,
Yolkfull
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-09 12:45 ` Uri Lublin
@ 2009-06-10 8:12 ` Yolkfull Chow
0 siblings, 0 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-10 8:12 UTC (permalink / raw)
To: Uri Lublin; +Cc: kvm
On 06/09/2009 08:45 PM, Uri Lublin wrote:
> On 06/09/2009 11:41 AM, Yolkfull Chow wrote:
>>
>> Hi,
>>
>> This test will boot VMs until one of them becomes unresponsive, and
>> records the maximum number of VMs successfully started.
>>
>>
>
> Hello,
>
> Some more comments (in addition to previous comments by others)
> 1. Do not just send monitor command "quit" but use vm.destroy
> * This was mentioned by Michael, but in a different context.
> 2. Do not destroy main_vm (or vm1). We may want to run other tests on it.
> 3. You can use enumerate(vms) instead of looking for vm with index.
> 4. It would be nice to close all ssh sessions too.
OK, I will modify the test according to your comments. Thank you so much. :)
Best Regards,
Yolkfull
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
[not found] <219655199.1650051244627445364.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-06-10 10:03 ` Michael Goldish
2009-06-10 10:31 ` Yolkfull Chow
0 siblings, 1 reply; 29+ messages in thread
From: Michael Goldish @ 2009-06-10 10:03 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: Uri Lublin, kvm
----- "Yolkfull Chow" <yzhou@redhat.com> wrote:
> On 06/09/2009 05:44 PM, Michael Goldish wrote:
> > The test looks pretty nicely written. Comments:
> >
> > 1. Consider making all the cloned VMs use image snapshots:
> >
> > curr_vm = vm1.clone()
> > curr_vm.get_params()["extra_params"] += " -snapshot"
> >
> > I'm not sure it's a good idea to let all VMs use the same disk
> image.
> > Or maybe you shouldn't add -snapshot yourself, but rather do it in
> the config
> > file for the first VM, and then all cloned VMs will have -snapshot
> as well.
> >
> Yes I use 'image_snapshot = yes' in config file.
> > 2. Consider changing the message
> > " Booting the %dth guest" % num
> > to
> > "Booting guest #%d" % num
> > (because there's no such thing as 2th and 3th)
> >
> > 3. Consider changing the message
> > "Cannot boot vm anylonger"
> > to
> > "Cannot create VM #%d" % num
> >
> > 4. Why not add curr_vm to vms immediately after cloning it?
> > That way you can kill it in the exception handler later, without
> having
> > to send it a 'quit' if you can't login ('if not curr_vm_session').
> >
> Yes, good idea.
> > 5. " %dth guest boots up successfully" % num --> again, 2th and 3th
> make no sense.
> > Also, I wonder why you add those spaces before every info message.
> >
> > 6. "%dth guest's session is not responsive" --> same
> > (maybe use "Guest session #%d is not responsive" % num)
> >
> > 7. "Shut down the %dth guest" --> same
> > (maybe "Shutting down guest #%d"? or destroying/killing?)
> >
> > 8. Shouldn't we fail the test when we find an unresponsive session?
> > It seems you just display an error message. You can simply replace
> > logging.error( with raise error.TestFail(.
> >
>
> > 9. Consider using a stricter test than just
> vm_session.is_responsive().
> > vm_session.is_responsive() just sends ENTER to the sessions and
> returns
> > True if it gets anything as a result (usually a prompt, or even just
> a
> > newline echoed back). If the session passes this test it is indeed
> > responsive, so it's a decent test, but maybe you can send some
> command
> > (user configurable?) and test for some output. I'm really not sure
> this
> > is important, because I can't imagine a session would respond to a
> newline
> > but not to other commands, but who knows. Maybe you can send the
> first VM
> > a user-specified command when the test begins, remember the output,
> and
> > then send all other VMs the same command and make sure the output is
> the
> > same.
> >
> maybe use 'info status' and send command 'help' via session to vms and
> compare their output?
I'm not sure I understand. What does 'info status' do? We're talking about
an SSH shell, not the monitor. You can use whatever you like, such as 'uname -a'
or 'ls /', but you should leave it up to the user to decide, so he/she
can specify different commands for different guests. Linux commands won't
work under Windows, so Linux and Windows must have different commands in
the config file. In the Linux section, under '- @Linux:' you can add
something like:
stress_boot:
stress_boot_test_command = uname -a
and under '- @Windows:':
stress_boot:
stress_boot_test_command = ver && vol
These commands are just naive suggestions. I'm sure someone can think of
much more informative commands.
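The test side could then be something like this (only a sketch; I am assuming the session object has a get_command_output()-style method, which may not be its real name; the reference output would be captured once from the first guest when the test starts):

    def guest_matches_reference(params, session, reference_output):
        # Run the user-configured command and compare with the output
        # captured from the first guest.
        test_command = params.get("stress_boot_test_command")
        if not test_command:
            return True                    # user disabled the check
        output = session.get_command_output(test_command)   # assumed API
        return output == reference_output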
> > 10. I'm not sure you should use the param "kill_vm_gracefully"
> because that's
> > a postprocessor param (probably not your business). You can just
> call
> > destroy() in the exception handler with gracefully=False, because if
> the VMs
> > are non- responsive, I don't expect them to shutdown nicely with an
> SSH
> > command (that's what gracefully does). Also, we're using -snapshot,
> so
> > there's no reason to shut them down nicely.
> >
> Yes, I agree. :)
> > 11. "Total number booted successfully: %d" % (num - 1) --> why not
> just num?
> > We really have num VMs including the first one.
> > Or you can say: "Total number booted successfully in addition to the
> first one"
> > but that's much longer.
> >
> Since after the first guest booted, I set num = 1 and then 'num += 1'
>
> at first in while loop ( for the purpose of getting a new vm ).
> So curr_vm is vm2 ( num is 2) now. If the second vm failed to boot up,
> the num booted successfully should be (num - 1).
> I would use enumerate(vms) that Uri suggested to make number easier to
> count.
OK, I didn't notice that.
> > 12. Consider adding a 'max_vms' (or 'threshold') user param to the
> test. If
> > num reaches 'max_vms', we stop adding VMs and pass the test.
> Otherwise the
> > test will always fail (which is depressing). If
> params.get("threshold") is
> > None or "", or in short -- 'if not params.get("threshold")', disable
> this
> > feature and keep adding VMs forever. The user can enable the feature
> with:
> > max_vms = 50
> > or disable it with:
> > max_vms =
> >
> This is a good idea for hardware resource limit of host.
> > 13. Why are you catching OSError? If you get OSError it might be a
> framework bug.
> >
> Since sometimes, vm.create() successfully but failed to ssh-login
> since
> the running python cannot allocate physical memory (OSError).
> Add max_vms could fix this problem I think.
Do you remember exactly where OSError was thrown? Do you happen to have
a backtrace? (I just want to be sure it's not a bug.)
> > 14. At the end of the exception handler you should probably re-raise
> the exception
> > you caught. Otherwise the user won't see the error message. You can
> simply replace
> > 'break' with 'raise' (no parameters), and it should work,
> hopefully.
> >
> Yes I should if add a 'max_vms'.
I think you should re-raise anyway. Otherwise, what's the point in writing
error messages such as "raise error.TestFail("Cannot boot vm anylonger")"?
If you don't re-raise, the user won't see the messages.
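In other words, something like this (a sketch, using the names already in the test):

    try:
        curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login, 240, 0, 2)
        if not curr_vm_session:
            raise error.TestFail("Cannot create VM #%d" % num)
    except Exception:
        for vm in vms:
            vm.destroy(gracefully=False)
        raise    # bare 'raise' re-raises the original error so the user sees it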
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-10 10:03 ` Michael Goldish
@ 2009-06-10 10:31 ` Yolkfull Chow
0 siblings, 0 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-10 10:31 UTC (permalink / raw)
To: Michael Goldish; +Cc: Uri Lublin, kvm
On 06/10/2009 06:03 PM, Michael Goldish wrote:
> ----- "Yolkfull Chow"<yzhou@redhat.com> wrote:
>
>
>> On 06/09/2009 05:44 PM, Michael Goldish wrote:
>>
>>> The test looks pretty nicely written. Comments:
>>>
>>> 1. Consider making all the cloned VMs use image snapshots:
>>>
>>> curr_vm = vm1.clone()
>>> curr_vm.get_params()["extra_params"] += " -snapshot"
>>>
>>> I'm not sure it's a good idea to let all VMs use the same disk
>>>
>> image.
>>
>>> Or maybe you shouldn't add -snapshot yourself, but rather do it in
>>>
>> the config
>>
>>> file for the first VM, and then all cloned VMs will have -snapshot
>>>
>> as well.
>>
>>>
>>>
>> Yes I use 'image_snapshot = yes' in config file.
>>
>>> 2. Consider changing the message
>>> " Booting the %dth guest" % num
>>> to
>>> "Booting guest #%d" % num
>>> (because there's no such thing as 2th and 3th)
>>>
>>> 3. Consider changing the message
>>> "Cannot boot vm anylonger"
>>> to
>>> "Cannot create VM #%d" % num
>>>
>>> 4. Why not add curr_vm to vms immediately after cloning it?
>>> That way you can kill it in the exception handler later, without
>>>
>> having
>>
>>> to send it a 'quit' if you can't login ('if not curr_vm_session').
>>>
>>>
>> Yes, good idea.
>>
>>> 5. " %dth guest boots up successfully" % num --> again, 2th and 3th
>>>
>> make no sense.
>>
>>> Also, I wonder why you add those spaces before every info message.
>>>
>>> 6. "%dth guest's session is not responsive" --> same
>>> (maybe use "Guest session #%d is not responsive" % num)
>>>
>>> 7. "Shut down the %dth guest" --> same
>>> (maybe "Shutting down guest #%d"? or destroying/killing?)
>>>
>>> 8. Shouldn't we fail the test when we find an unresponsive session?
>>> It seems you just display an error message. You can simply replace
>>> logging.error( with raise error.TestFail(.
>>>
>>>
>>
>>> 9. Consider using a stricter test than just
>>>
>> vm_session.is_responsive().
>>
>>> vm_session.is_responsive() just sends ENTER to the sessions and
>>>
>> returns
>>
>>> True if it gets anything as a result (usually a prompt, or even just
>>>
>> a
>>
>>> newline echoed back). If the session passes this test it is indeed
>>> responsive, so it's a decent test, but maybe you can send some
>>>
>> command
>>
>>> (user configurable?) and test for some output. I'm really not sure
>>>
>> this
>>
>>> is important, because I can't imagine a session would respond to a
>>>
>> newline
>>
>>> but not to other commands, but who knows. Maybe you can send the
>>>
>> first VM
>>
>>> a user-specified command when the test begins, remember the output,
>>>
>> and
>>
>>> then send all other VMs the same command and make sure the output is
>>>
>> the
>>
>>> same.
>>>
>>>
>> maybe use 'info status' and send command 'help' via session to vms and
>> compare their output?
>>
> I'm not sure I understand. What does 'info status' do? We're talking about
> an SSH shell, not the monitor. You can do whatever you like, like 'uname -a',
> and 'ls /', but you should leave it up to the user to decide, so he/she
> can specify different commands for different guests. Linux commands won't
> work under Windows, so Linux and Windows must have different commands in
> the config file. In the Linux section, under '- @Linux:' you can add
> something like:
>
> stress_boot:
> stress_boot_test_command = uname -a
>
> and under '- @Windows:':
>
> stress_boot:
> stress_boot_test_command = ver&& vol
>
> These commands are just naive suggestions. I'm sure someone can think of
> much more informative commands.
>
Those are really good suggestions. Thanks, Michael. Can I use
'migration_test_command' instead?
>
>>> 10. I'm not sure you should use the param "kill_vm_gracefully"
>>>
>> because that's
>>
>>> a postprocessor param (probably not your business). You can just
>>>
>> call
>>
>>> destroy() in the exception handler with gracefully=False, because if
>>>
>> the VMs
>>
>>> are non- responsive, I don't expect them to shutdown nicely with an
>>>
>> SSH
>>
>>> command (that's what gracefully does). Also, we're using -snapshot,
>>>
>> so
>>
>>> there's no reason to shut them down nicely.
>>>
>>>
>> Yes, I agree. :)
>>
>>> 11. "Total number booted successfully: %d" % (num - 1) --> why not
>>>
>> just num?
>>
>>> We really have num VMs including the first one.
>>> Or you can say: "Total number booted successfully in addition to the
>>>
>> first one"
>>
>>> but that's much longer.
>>>
>>>
>> Since after the first guest booted, I set num = 1 and then 'num += 1'
>>
>> at first in while loop ( for the purpose of getting a new vm ).
>> So curr_vm is vm2 ( num is 2) now. If the second vm failed to boot up,
>> the num booted successfully should be (num - 1).
>> I would use enumerate(vms) that Uri suggested to make number easier to
>> count.
>>
> OK, I didn't notice that.
>
>
>>> 12. Consider adding a 'max_vms' (or 'threshold') user param to the
>>>
>> test. If
>>
>>> num reaches 'max_vms', we stop adding VMs and pass the test.
>>>
>> Otherwise the
>>
>>> test will always fail (which is depressing). If
>>>
>> params.get("threshold") is
>>
>>> None or "", or in short -- 'if not params.get("threshold")', disable
>>>
>> this
>>
>>> feature and keep adding VMs forever. The user can enable the feature
>>>
>> with:
>>
>>> max_vms = 50
>>> or disable it with:
>>> max_vms =
>>>
>>>
>> This is a good idea for hardware resource limit of host.
>>
>>> 13. Why are you catching OSError? If you get OSError it might be a
>>>
>> framework bug.
>>
>>>
>>>
>> Since sometimes, vm.create() successfully but failed to ssh-login
>> since
>> the running python cannot allocate physical memory (OSError).
>> Add max_vms could fix this problem I think.
>>
> Do you remember exactly where OSError was thrown? Do you happen to have
> a backtrace? (I just want to be very it's not a bug.)
>
The OSError was thrown while checking that all VMs are responsive, and I got
many tracebacks with "OSError: [Errno 12] Cannot allocate memory".
Maybe the last VM was only created successfully by luck, and python could not
allocate physical memory afterwards when checking all the sessions.
So can we catch the OSError and tell the user that max_vms is too large?
--
Yolkfull
Regards,
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
[not found] <443392010.1660281244634434026.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-06-10 11:52 ` Michael Goldish
2009-06-11 3:37 ` Yolkfull Chow
0 siblings, 1 reply; 29+ messages in thread
From: Michael Goldish @ 2009-06-10 11:52 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: Uri Lublin, kvm
----- "Yolkfull Chow" <yzhou@redhat.com> wrote:
> On 06/10/2009 06:03 PM, Michael Goldish wrote:
> > ----- "Yolkfull Chow"<yzhou@redhat.com> wrote:
> >
> >
> >> On 06/09/2009 05:44 PM, Michael Goldish wrote:
> >>
> >>> The test looks pretty nicely written. Comments:
> >>>
> >>> 1. Consider making all the cloned VMs use image snapshots:
> >>>
> >>> curr_vm = vm1.clone()
> >>> curr_vm.get_params()["extra_params"] += " -snapshot"
> >>>
> >>> I'm not sure it's a good idea to let all VMs use the same disk
> >>>
> >> image.
> >>
> >>> Or maybe you shouldn't add -snapshot yourself, but rather do it
> in
> >>>
> >> the config
> >>
> >>> file for the first VM, and then all cloned VMs will have
> -snapshot
> >>>
> >> as well.
> >>
> >>>
> >>>
> >> Yes I use 'image_snapshot = yes' in config file.
> >>
> >>> 2. Consider changing the message
> >>> " Booting the %dth guest" % num
> >>> to
> >>> "Booting guest #%d" % num
> >>> (because there's no such thing as 2th and 3th)
> >>>
> >>> 3. Consider changing the message
> >>> "Cannot boot vm anylonger"
> >>> to
> >>> "Cannot create VM #%d" % num
> >>>
> >>> 4. Why not add curr_vm to vms immediately after cloning it?
> >>> That way you can kill it in the exception handler later, without
> >>>
> >> having
> >>
> >>> to send it a 'quit' if you can't login ('if not
> curr_vm_session').
> >>>
> >>>
> >> Yes, good idea.
> >>
> >>> 5. " %dth guest boots up successfully" % num --> again, 2th and
> 3th
> >>>
> >> make no sense.
> >>
> >>> Also, I wonder why you add those spaces before every info
> message.
> >>>
> >>> 6. "%dth guest's session is not responsive" --> same
> >>> (maybe use "Guest session #%d is not responsive" % num)
> >>>
> >>> 7. "Shut down the %dth guest" --> same
> >>> (maybe "Shutting down guest #%d"? or destroying/killing?)
> >>>
> >>> 8. Shouldn't we fail the test when we find an unresponsive
> session?
> >>> It seems you just display an error message. You can simply
> replace
> >>> logging.error( with raise error.TestFail(.
> >>>
> >>>
> >>
> >>> 9. Consider using a stricter test than just
> >>>
> >> vm_session.is_responsive().
> >>
> >>> vm_session.is_responsive() just sends ENTER to the sessions and
> >>>
> >> returns
> >>
> >>> True if it gets anything as a result (usually a prompt, or even
> just
> >>>
> >> a
> >>
> >>> newline echoed back). If the session passes this test it is
> indeed
> >>> responsive, so it's a decent test, but maybe you can send some
> >>>
> >> command
> >>
> >>> (user configurable?) and test for some output. I'm really not
> sure
> >>>
> >> this
> >>
> >>> is important, because I can't imagine a session would respond to
> a
> >>>
> >> newline
> >>
> >>> but not to other commands, but who knows. Maybe you can send the
> >>>
> >> first VM
> >>
> >>> a user-specified command when the test begins, remember the
> output,
> >>>
> >> and
> >>
> >>> then send all other VMs the same command and make sure the output
> is
> >>>
> >> the
> >>
> >>> same.
> >>>
> >>>
> >> maybe use 'info status' and send command 'help' via session to vms
> and
> >> compare their output?
> >>
> > I'm not sure I understand. What does 'info status' do? We're talking
> about
> > an SSH shell, not the monitor. You can do whatever you like, like
> 'uname -a',
> > and 'ls /', but you should leave it up to the user to decide, so
> he/she
> > can specify different commands for different guests. Linux commands
> won't
> > work under Windows, so Linux and Windows must have different
> commands in
> > the config file. In the Linux section, under '- @Linux:' you can
> add
> > something like:
> >
> > stress_boot:
> > stress_boot_test_command = uname -a
> >
> > and under '- @Windows:':
> >
> > stress_boot:
> > stress_boot_test_command = ver && vol
> >
> > These commands are just naive suggestions. I'm sure someone can
> think of
> > much more informative commands.
> >
> That's really good suggestions. Thanks, Michael. And can I use
> 'migration_test_command' instead?
Not really. Why would you want to use another test's param?
1. There's no guarantee that 'migration_test_command' is defined
for your boot stress test. In fact, it is probably only defined for
migration tests, so you probably won't be able to access it. Try
params.get('migration_test_command') in your test and you'll probably
get None.
2. The user may not want to run migration at all, and then he/she
will probably not define 'migration_test_command'.
3. The user might want to use different test commands for migration
and for the boot stress test.
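Just to illustrate point 1 (a trivial sketch): in a stress_boot run the migration key simply is not there, so

    params.get("migration_test_command")        # -> None
    params.get("migration_test_command", "")    # -> "" if you pass a default

which is why the test should read its own parameter instead.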
> >>> 10. I'm not sure you should use the param "kill_vm_gracefully"
> >>>
> >> because that's
> >>
> >>> a postprocessor param (probably not your business). You can just
> >>>
> >> call
> >>
> >>> destroy() in the exception handler with gracefully=False, because
> if
> >>>
> >> the VMs
> >>
> >>> are non- responsive, I don't expect them to shutdown nicely with
> an
> >>>
> >> SSH
> >>
> >>> command (that's what gracefully does). Also, we're using
> -snapshot,
> >>>
> >> so
> >>
> >>> there's no reason to shut them down nicely.
> >>>
> >>>
> >> Yes, I agree. :)
> >>
> >>> 11. "Total number booted successfully: %d" % (num - 1) --> why
> not
> >>>
> >> just num?
> >>
> >>> We really have num VMs including the first one.
> >>> Or you can say: "Total number booted successfully in addition to
> the
> >>>
> >> first one"
> >>
> >>> but that's much longer.
> >>>
> >>>
> >> Since after the first guest booted, I set num = 1 and then 'num +=
> 1'
> >>
> >> at first in while loop ( for the purpose of getting a new vm ).
> >> So curr_vm is vm2 ( num is 2) now. If the second vm failed to boot
> up,
> >> the num booted successfully should be (num - 1).
> >> I would use enumerate(vms) that Uri suggested to make number easier
> to
> >> count.
> >>
> > OK, I didn't notice that.
> >
> >
> >>> 12. Consider adding a 'max_vms' (or 'threshold') user param to
> the
> >>>
> >> test. If
> >>
> >>> num reaches 'max_vms', we stop adding VMs and pass the test.
> >>>
> >> Otherwise the
> >>
> >>> test will always fail (which is depressing). If
> >>>
> >> params.get("threshold") is
> >>
> >>> None or "", or in short -- 'if not params.get("threshold")',
> disable
> >>>
> >> this
> >>
> >>> feature and keep adding VMs forever. The user can enable the
> feature
> >>>
> >> with:
> >>
> >>> max_vms = 50
> >>> or disable it with:
> >>> max_vms =
> >>>
> >>>
> >> This is a good idea for hardware resource limit of host.
> >>
> >>> 13. Why are you catching OSError? If you get OSError it might be
> a
> >>>
> >> framework bug.
> >>
> >>>
> >>>
> >> Since sometimes, vm.create() successfully but failed to ssh-login
> >> since
> >> the running python cannot allocate physical memory (OSError).
> >> Add max_vms could fix this problem I think.
> >>
> > Do you remember exactly where OSError was thrown? Do you happen to
> have
> > a backtrace? (I just want to be very it's not a bug.)
> >
> The OSError was thrown when checking all VMs are responsive and I got
> many traceback about "OSError: [Errno 12] Cannot allocate memory".
> Maybe since when last VM was created successfully with lucky, whereas
> python cannot get physical memory after that when checking all
> sessions.
> So can we now catch the OSError and tell user the number of max_vms
> is too large?
Sure. I was just worried it might be a framework bug. If it's a legitimate
memory error -- catch it and fail the test.
If you happen to catch that OSError again, and get a backtrace, I'd like
to see it if that's possible.
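By "catch it and fail the test" I mean roughly this (a sketch, wrapping the login step that blew up in the backtrace):

    try:
        curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login, 240, 0, 2)
    except OSError, e:
        # The host ran out of memory while forking ssh; fail cleanly.
        raise error.TestFail("Host cannot allocate memory for more VMs: %s" % e)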
Thanks,
Michael
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-10 11:52 ` Michael Goldish
@ 2009-06-11 3:37 ` Yolkfull Chow
0 siblings, 0 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-11 3:37 UTC (permalink / raw)
To: Michael Goldish; +Cc: Uri Lublin, kvm
On 06/10/2009 07:52 PM, Michael Goldish wrote:
> ----- "Yolkfull Chow"<yzhou@redhat.com> wrote:
>
>
>> On 06/10/2009 06:03 PM, Michael Goldish wrote:
>>
>>> ----- "Yolkfull Chow"<yzhou@redhat.com> wrote:
>>>
>>>
>>>
>>>> On 06/09/2009 05:44 PM, Michael Goldish wrote:
>>>>
>>>>
>>>>> The test looks pretty nicely written. Comments:
>>>>>
>>>>> 1. Consider making all the cloned VMs use image snapshots:
>>>>>
>>>>> curr_vm = vm1.clone()
>>>>> curr_vm.get_params()["extra_params"] += " -snapshot"
>>>>>
>>>>> I'm not sure it's a good idea to let all VMs use the same disk
>>>>>
>>>>>
>>>> image.
>>>>
>>>>
>>>>> Or maybe you shouldn't add -snapshot yourself, but rather do it
>>>>>
>> in
>>
>>>>>
>>>>>
>>>> the config
>>>>
>>>>
>>>>> file for the first VM, and then all cloned VMs will have
>>>>>
>> -snapshot
>>
>>>>>
>>>>>
>>>> as well.
>>>>
>>>>
>>>>>
>>>>>
>>>> Yes I use 'image_snapshot = yes' in config file.
>>>>
>>>>
>>>>> 2. Consider changing the message
>>>>> " Booting the %dth guest" % num
>>>>> to
>>>>> "Booting guest #%d" % num
>>>>> (because there's no such thing as 2th and 3th)
>>>>>
>>>>> 3. Consider changing the message
>>>>> "Cannot boot vm anylonger"
>>>>> to
>>>>> "Cannot create VM #%d" % num
>>>>>
>>>>> 4. Why not add curr_vm to vms immediately after cloning it?
>>>>> That way you can kill it in the exception handler later, without
>>>>>
>>>>>
>>>> having
>>>>
>>>>
>>>>> to send it a 'quit' if you can't login ('if not
>>>>>
>> curr_vm_session').
>>
>>>>>
>>>>>
>>>> Yes, good idea.
>>>>
>>>>
>>>>> 5. " %dth guest boots up successfully" % num --> again, 2th and
>>>>>
>> 3th
>>
>>>>>
>>>>>
>>>> make no sense.
>>>>
>>>>
>>>>> Also, I wonder why you add those spaces before every info
>>>>>
>> message.
>>
>>>>> 6. "%dth guest's session is not responsive" --> same
>>>>> (maybe use "Guest session #%d is not responsive" % num)
>>>>>
>>>>> 7. "Shut down the %dth guest" --> same
>>>>> (maybe "Shutting down guest #%d"? or destroying/killing?)
>>>>>
>>>>> 8. Shouldn't we fail the test when we find an unresponsive
>>>>>
>> session?
>>
>>>>> It seems you just display an error message. You can simply
>>>>>
>> replace
>>
>>>>> logging.error( with raise error.TestFail(.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>> 9. Consider using a stricter test than just
>>>>>
>>>>>
>>>> vm_session.is_responsive().
>>>>
>>>>
>>>>> vm_session.is_responsive() just sends ENTER to the sessions and
>>>>>
>>>>>
>>>> returns
>>>>
>>>>
>>>>> True if it gets anything as a result (usually a prompt, or even
>>>>>
>> just
>>
>>>>>
>>>>>
>>>> a
>>>>
>>>>
>>>>> newline echoed back). If the session passes this test it is
>>>>>
>> indeed
>>
>>>>> responsive, so it's a decent test, but maybe you can send some
>>>>>
>>>>>
>>>> command
>>>>
>>>>
>>>>> (user configurable?) and test for some output. I'm really not
>>>>>
>> sure
>>
>>>>>
>>>>>
>>>> this
>>>>
>>>>
>>>>> is important, because I can't imagine a session would respond to
>>>>>
>> a
>>
>>>>>
>>>>>
>>>> newline
>>>>
>>>>
>>>>> but not to other commands, but who knows. Maybe you can send the
>>>>>
>>>>>
>>>> first VM
>>>>
>>>>
>>>>> a user-specified command when the test begins, remember the
>>>>>
>> output,
>>
>>>>>
>>>>>
>>>> and
>>>>
>>>>
>>>>> then send all other VMs the same command and make sure the output
>>>>>
>> is
>>
>>>>>
>>>>>
>>>> the
>>>>
>>>>
>>>>> same.
>>>>>
>>>>>
>>>>>
>>>> maybe use 'info status' and send command 'help' via session to vms
>>>>
>> and
>>
>>>> compare their output?
>>>>
>>>>
>>> I'm not sure I understand. What does 'info status' do? We're talking
>>>
>> about
>>
>>> an SSH shell, not the monitor. You can do whatever you like, like
>>>
>> 'uname -a',
>>
>>> and 'ls /', but you should leave it up to the user to decide, so
>>>
>> he/she
>>
>>> can specify different commands for different guests. Linux commands
>>>
>> won't
>>
>>> work under Windows, so Linux and Windows must have different
>>>
>> commands in
>>
>>> the config file. In the Linux section, under '- @Linux:' you can
>>>
>> add
>>
>>> something like:
>>>
>>> stress_boot:
>>> stress_boot_test_command = uname -a
>>>
>>> and under '- @Windows:':
>>>
>>> stress_boot:
>>> stress_boot_test_command = ver&& vol
>>>
>>> These commands are just naive suggestions. I'm sure someone can
>>>
>> think of
>>
>>> much more informative commands.
>>>
>>>
>> That's really good suggestions. Thanks, Michael. And can I use
>> 'migration_test_command' instead?
>>
> Not really. Why would you want to use another test's param?
>
> 1. There's no guarantee that 'migration_test_command' is defined
> for your boot stress test. In fact, it is probably only defined for
> migration tests, so you probably won't be able to access it. Try
> params.get('migration_test_command') in your test and you'll probably
> get None.
>
> 2. The user may not want to run migration at all, and then he/she
> will probably not define 'migration_test_command'.
>
> 3. The user might want to use different test commands for migration
> and for the boot stress test.
>
>
>>>>> 10. I'm not sure you should use the param "kill_vm_gracefully"
>>>>>
>>>>>
>>>> because that's
>>>>
>>>>
>>>>> a postprocessor param (probably not your business). You can just
>>>>>
>>>>>
>>>> call
>>>>
>>>>
>>>>> destroy() in the exception handler with gracefully=False, because
>>>>>
>> if
>>
>>>>>
>>>>>
>>>> the VMs
>>>>
>>>>
>>>>> are non- responsive, I don't expect them to shutdown nicely with
>>>>>
>> an
>>
>>>>>
>>>>>
>>>> SSH
>>>>
>>>>
>>>>> command (that's what gracefully does). Also, we're using
>>>>>
>> -snapshot,
>>
>>>>>
>>>>>
>>>> so
>>>>
>>>>
>>>>> there's no reason to shut them down nicely.
>>>>>
>>>>>
>>>>>
>>>> Yes, I agree. :)
>>>>
>>>>
>>>>> 11. "Total number booted successfully: %d" % (num - 1) --> why
>>>>>
>> not
>>
>>>>>
>>>>>
>>>> just num?
>>>>
>>>>
>>>>> We really have num VMs including the first one.
>>>>> Or you can say: "Total number booted successfully in addition to
>>>>>
>> the
>>
>>>>>
>>>>>
>>>> first one"
>>>>
>>>>
>>>>> but that's much longer.
>>>>>
>>>>>
>>>>>
>>>> Since after the first guest booted, I set num = 1 and then 'num +=
>>>>
>> 1'
>>
>>>> at first in while loop ( for the purpose of getting a new vm ).
>>>> So curr_vm is vm2 ( num is 2) now. If the second vm failed to boot
>>>>
>> up,
>>
>>>> the num booted successfully should be (num - 1).
>>>> I would use enumerate(vms) that Uri suggested to make number easier
>>>>
>> to
>>
>>>> count.
>>>>
>>>>
>>> OK, I didn't notice that.
>>>
>>>
>>>
>>>>> 12. Consider adding a 'max_vms' (or 'threshold') user param to
>>>>>
>> the
>>
>>>>>
>>>>>
>>>> test. If
>>>>
>>>>
>>>>> num reaches 'max_vms', we stop adding VMs and pass the test.
>>>>>
>>>>>
>>>> Otherwise the
>>>>
>>>>
>>>>> test will always fail (which is depressing). If
>>>>>
>>>>>
>>>> params.get("threshold") is
>>>>
>>>>
>>>>> None or "", or in short -- 'if not params.get("threshold")',
>>>>>
>> disable
>>
>>>>>
>>>>>
>>>> this
>>>>
>>>>
>>>>> feature and keep adding VMs forever. The user can enable the
>>>>>
>> feature
>>
>>>>>
>>>>>
>>>> with:
>>>>
>>>>
>>>>> max_vms = 50
>>>>> or disable it with:
>>>>> max_vms =
>>>>>
>>>>>
>>>>>
>>>> This is a good idea for hardware resource limit of host.
>>>>
>>>>
>>>>> 13. Why are you catching OSError? If you get OSError it might be
>>>>>
>> a
>>
>>>>>
>>>>>
>>>> framework bug.
>>>>
>>>>
>>>>>
>>>>>
>>>> Since sometimes, vm.create() successfully but failed to ssh-login
>>>> since
>>>> the running python cannot allocate physical memory (OSError).
>>>> Add max_vms could fix this problem I think.
>>>>
>>>>
>>> Do you remember exactly where OSError was thrown? Do you happen to
>>>
>> have
>>
>>> a backtrace? (I just want to be very it's not a bug.)
>>>
>>>
>> The OSError was thrown when checking all VMs are responsive and I got
>> many traceback about "OSError: [Errno 12] Cannot allocate memory".
>> Maybe since when last VM was created successfully with lucky, whereas
>> python cannot get physical memory after that when checking all
>> sessions.
>> So can we now catch the OSError and tell user the number of max_vms
>> is too large?
>>
> Sure. I was just worried it might be a framework bug. If it's a legitimate
> memory error -- catch it and fail the test.
>
> If you happen to catch that OSError again, and get a backtrace, I'd like
> to see it if that's possible.
>
Michael, these are the backtrace messages:
...
20090611-064959
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
ERROR: run_once: Test failed: [Errno 12] Cannot allocate memory
20090611-064959
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
DEBUG: run_once: Postprocessing on error...
20090611-065000
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
DEBUG: postprocess_vm: Postprocessing VM 'vm1'...
20090611-065000
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
DEBUG: postprocess_vm: VM object found in environment
20090611-065000
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
DEBUG: send_monitor_cmd: Sending monitor command: screendump
/kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
20090611-065000
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
DEBUG: run_once: Contents of environment: {'vm__vm1': <kvm_vm.VM
instance at 0x92999a28>}
post-test sysinfo error:
Traceback (most recent call last):
File "/kvm-autotest/client/common_lib/log.py", line 58, in decorated_func
fn(*args, **dargs)
File "/kvm-autotest/client/bin/base_sysinfo.py", line 213, in
log_after_each_test
log.run(test_sysinfodir)
File "/kvm-autotest/client/bin/base_sysinfo.py", line 112, in run
shell=True, env=env)
File "/usr/lib64/python2.4/subprocess.py", line 412, in call
return Popen(*args, **kwargs).wait()
File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
errread, errwrite)
File "/usr/lib64/python2.4/subprocess.py", line 902, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
2009-06-11 06:50:02,859 Configuring logger for client level
FAIL
kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
timestamp=1244717402 localtime=Jun 11 06:50:02 Unhandled OSError:
[Errno 12] Cannot allocate memory
Traceback (most recent call last):
File "/kvm-autotest/client/common_lib/test.py", line 304,
in _exec
self.execute(*p_args, **p_dargs)
File "/kvm-autotest/client/common_lib/test.py", line 187,
in execute
self.run_once(*args, **dargs)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_runtest_2.py", line 145,
in run_once
routine_obj.routine(self, params, env)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py", line 3071, in
run_boot_vms
curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login,
240, 0, 2)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 797, in
wait_for
output = func()
File "/kvm-autotest/client/tests/kvm_runtest_2/kvm_vm.py",
line 728, in ssh_login
session = kvm_utils.ssh(address, port, username,
password, prompt, timeout)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 553, in ssh
return remote_login(command, password, prompt, "\n", timeout)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 431, in
remote_login
sub = kvm_spawn(command, linesep)
File
"/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 114, in
__init__
(pid, fd) = pty.fork()
File "/usr/lib64/python2.4/pty.py", line 108, in fork
pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Persistent state variable __group_level now set to 1
END FAIL
kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
timestamp=1244717403 localtime=Jun 11 06:50:03
Dropping caches
2009-06-11 06:50:03,409 running: sync
JOB ERROR: Unhandled OSError: [Errno 12] Cannot allocate memory
Traceback (most recent call last):
File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
execfile(self.control, global_control_vars, global_control_vars)
File "/kvm-autotest/client/control", line 1030, in ?
cfg_to_test("kvm_tests.cfg")
File "/kvm-autotest/client/control", line 1013, in cfg_to_test
current_status = job.run_test("kvm_runtest_2", params=dict,
tag=tagname)
File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
utils.drop_caches()
File "/kvm-autotest/client/bin/base_utils.py", line 638, in drop_caches
utils.system("sync")
File "/kvm-autotest/client/common_lib/utils.py", line 510, in system
stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
File "/kvm-autotest/client/common_lib/utils.py", line 330, in run
bg_job = join_bg_jobs(
File "/kvm-autotest/client/common_lib/utils.py", line 37, in __init__
stdin=stdin)
File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
errread, errwrite)
File "/usr/lib64/python2.4/subprocess.py", line 902, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Persistent state variable __group_level now set to 0
END ABORT ---- ---- timestamp=1244717418 localtime=Jun 11
06:50:18 Unhandled OSError: [Errno 12] Cannot allocate memory
Traceback (most recent call last):
File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
execfile(self.control, global_control_vars, global_control_vars)
File "/kvm-autotest/client/control", line 1030, in ?
cfg_to_test("kvm_tests.cfg")
File "/kvm-autotest/client/control", line 1013, in cfg_to_test
current_status = job.run_test("kvm_runtest_2", params=dict,
tag=tagname)
File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
utils.drop_caches()
File "/kvm-autotest/client/bin/base_utils.py", line 638, in drop_caches
utils.system("sync")
File "/kvm-autotest/client/common_lib/utils.py", line 510, in system
stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
File "/kvm-autotest/client/common_lib/utils.py", line 330, in run
bg_job = join_bg_jobs(
File "/kvm-autotest/client/common_lib/utils.py", line 37, in __init__
stdin=stdin)
File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
errread, errwrite)
File "/usr/lib64/python2.4/subprocess.py", line 902, in _execute_child
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
[root@dhcp-66-70-9 kvm_runtest_2]#
--
Yolkfull
Regards,
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
[not found] <120253480.1747631244710010660.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
@ 2009-06-11 8:53 ` Michael Goldish
2009-06-11 9:46 ` Yolkfull Chow
0 siblings, 1 reply; 29+ messages in thread
From: Michael Goldish @ 2009-06-11 8:53 UTC (permalink / raw)
To: Yolkfull Chow; +Cc: Uri Lublin, kvm
----- "Yolkfull Chow" <yzhou@redhat.com> wrote:
> Michael, these are the backtrace messages:
>
> ...
> 20090611-064959
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> ERROR: run_once: Test failed: [Errno 12] Cannot allocate memory
> 20090611-064959
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> DEBUG: run_once: Postprocessing on error...
> 20090611-065000
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> DEBUG: postprocess_vm: Postprocessing VM 'vm1'...
> 20090611-065000
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> DEBUG: postprocess_vm: VM object found in environment
> 20090611-065000
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> DEBUG: send_monitor_cmd: Sending monitor command: screendump
> /kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
> 20090611-065000
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>
> DEBUG: run_once: Contents of environment: {'vm__vm1': <kvm_vm.VM
> instance at 0x92999a28>}
> post-test sysinfo error:
> Traceback (most recent call last):
> File "/kvm-autotest/client/common_lib/log.py", line 58, in
> decorated_func
> fn(*args, **dargs)
> File "/kvm-autotest/client/bin/base_sysinfo.py", line 213, in
> log_after_each_test
> log.run(test_sysinfodir)
> File "/kvm-autotest/client/bin/base_sysinfo.py", line 112, in run
> shell=True, env=env)
> File "/usr/lib64/python2.4/subprocess.py", line 412, in call
> return Popen(*args, **kwargs).wait()
> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
> errread, errwrite)
> File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
> self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> 2009-06-11 06:50:02,859 Configuring logger for client level
> FAIL
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>
> timestamp=1244717402 localtime=Jun 11 06:50:02 Unhandled
> OSError:
> [Errno 12] Cannot allocate memory
> Traceback (most recent call last):
> File "/kvm-autotest/client/common_lib/test.py", line 304,
>
> in _exec
> self.execute(*p_args, **p_dargs)
> File "/kvm-autotest/client/common_lib/test.py", line 187,
>
> in execute
> self.run_once(*args, **dargs)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_runtest_2.py", line 145,
>
> in run_once
> routine_obj.routine(self, params, env)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py", line 3071, in
>
> run_boot_vms
> curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login,
>
> 240, 0, 2)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 797, in
>
> wait_for
> output = func()
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_vm.py",
> line 728, in ssh_login
> session = kvm_utils.ssh(address, port, username,
> password, prompt, timeout)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 553, in
> ssh
> return remote_login(command, password, prompt, "\n",
> timeout)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 431, in
>
> remote_login
> sub = kvm_spawn(command, linesep)
> File
> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 114, in
>
> __init__
> (pid, fd) = pty.fork()
> File "/usr/lib64/python2.4/pty.py", line 108, in fork
> pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> Persistent state variable __group_level now set to 1
> END FAIL
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>
> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>
> timestamp=1244717403 localtime=Jun 11 06:50:03
> Dropping caches
> 2009-06-11 06:50:03,409 running: sync
> JOB ERROR: Unhandled OSError: [Errno 12] Cannot allocate memory
> Traceback (most recent call last):
> File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
> execfile(self.control, global_control_vars, global_control_vars)
> File "/kvm-autotest/client/control", line 1030, in ?
> cfg_to_test("kvm_tests.cfg")
> File "/kvm-autotest/client/control", line 1013, in cfg_to_test
> current_status = job.run_test("kvm_runtest_2", params=dict,
> tag=tagname)
> File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
> utils.drop_caches()
> File "/kvm-autotest/client/bin/base_utils.py", line 638, in
> drop_caches
> utils.system("sync")
> File "/kvm-autotest/client/common_lib/utils.py", line 510, in
> system
> stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
> File "/kvm-autotest/client/common_lib/utils.py", line 330, in run
> bg_job = join_bg_jobs(
> File "/kvm-autotest/client/common_lib/utils.py", line 37, in
> __init__
> stdin=stdin)
> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
> errread, errwrite)
> File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
> self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
>
> Persistent state variable __group_level now set to 0
> END ABORT ---- ---- timestamp=1244717418 localtime=Jun 11
>
> 06:50:18 Unhandled OSError: [Errno 12] Cannot allocate memory
> Traceback (most recent call last):
> File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
> execfile(self.control, global_control_vars,
> global_control_vars)
> File "/kvm-autotest/client/control", line 1030, in ?
> cfg_to_test("kvm_tests.cfg")
> File "/kvm-autotest/client/control", line 1013, in cfg_to_test
> current_status = job.run_test("kvm_runtest_2", params=dict,
> tag=tagname)
> File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
> utils.drop_caches()
> File "/kvm-autotest/client/bin/base_utils.py", line 638, in
> drop_caches
> utils.system("sync")
> File "/kvm-autotest/client/common_lib/utils.py", line 510, in
> system
> stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
> File "/kvm-autotest/client/common_lib/utils.py", line 330, in
> run
> bg_job = join_bg_jobs(
> File "/kvm-autotest/client/common_lib/utils.py", line 37, in
> __init__
> stdin=stdin)
> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
> errread, errwrite)
> File "/usr/lib64/python2.4/subprocess.py", line 902, in
> _execute_child
> self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
> [root@dhcp-66-70-9 kvm_runtest_2]#
Thanks. It does indeed look like a legitimate OSError in os.fork().
BTW, do you have any idea why the result dir has such a weird name?
/kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
And why does a normal-looking tag sometimes appear (in the log messages):
no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
Why all the [] and <> in the weird version? Did you somehow do that intentionally, or is it some sort of bug?
And why is 'None' there? The tag is supposed to be the test's 'shortname', which is determined by kvm_config.py
as it parses kvm_tests.cfg (or the config file you're using).
Normally the result dir should just be kvm_runtest_2.shortname, and in this case:
kvm_runtest_2.no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive
2009-06-11 8:53 ` Michael Goldish
@ 2009-06-11 9:46 ` Yolkfull Chow
0 siblings, 0 replies; 29+ messages in thread
From: Yolkfull Chow @ 2009-06-11 9:46 UTC (permalink / raw)
To: Michael Goldish; +Cc: Uri Lublin, kvm
On 06/11/2009 04:53 PM, Michael Goldish wrote:
> ----- "Yolkfull Chow"<yzhou@redhat.com> wrote:
>
>
>> Michael, these are the backtrace messages:
>>
>> ...
>> 20090611-064959
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> ERROR: run_once: Test failed: [Errno 12] Cannot allocate memory
>> 20090611-064959
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> DEBUG: run_once: Postprocessing on error...
>> 20090611-065000
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> DEBUG: postprocess_vm: Postprocessing VM 'vm1'...
>> 20090611-065000
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> DEBUG: postprocess_vm: VM object found in environment
>> 20090611-065000
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> DEBUG: send_monitor_cmd: Sending monitor command: screendump
>> /kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
>> 20090611-065000
>> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024:
>>
>> DEBUG: run_once: Contents of environment: {'vm__vm1':<kvm_vm.VM
>> instance at 0x92999a28>}
>> post-test sysinfo error:
>> Traceback (most recent call last):
>> File "/kvm-autotest/client/common_lib/log.py", line 58, in
>> decorated_func
>> fn(*args, **dargs)
>> File "/kvm-autotest/client/bin/base_sysinfo.py", line 213, in
>> log_after_each_test
>> log.run(test_sysinfodir)
>> File "/kvm-autotest/client/bin/base_sysinfo.py", line 112, in run
>> shell=True, env=env)
>> File "/usr/lib64/python2.4/subprocess.py", line 412, in call
>> return Popen(*args, **kwargs).wait()
>> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>> errread, errwrite)
>> File "/usr/lib64/python2.4/subprocess.py", line 902, in
>> _execute_child
>> self.pid = os.fork()
>> OSError: [Errno 12] Cannot allocate memory
>> 2009-06-11 06:50:02,859 Configuring logger for client level
>> FAIL
>> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>>
>> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>>
>> timestamp=1244717402 localtime=Jun 11 06:50:02 Unhandled
>> OSError:
>> [Errno 12] Cannot allocate memory
>> Traceback (most recent call last):
>> File "/kvm-autotest/client/common_lib/test.py", line 304,
>>
>> in _exec
>> self.execute(*p_args, **p_dargs)
>> File "/kvm-autotest/client/common_lib/test.py", line 187,
>>
>> in execute
>> self.run_once(*args, **dargs)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_runtest_2.py", line 145,
>>
>> in run_once
>> routine_obj.routine(self, params, env)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py", line 3071, in
>>
>> run_boot_vms
>> curr_vm_session = kvm_utils.wait_for(curr_vm.ssh_login,
>>
>> 240, 0, 2)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 797, in
>>
>> wait_for
>> output = func()
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_vm.py",
>> line 728, in ssh_login
>> session = kvm_utils.ssh(address, port, username,
>> password, prompt, timeout)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 553, in
>> ssh
>> return remote_login(command, password, prompt, "\n",
>> timeout)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 431, in
>>
>> remote_login
>> sub = kvm_spawn(command, linesep)
>> File
>> "/kvm-autotest/client/tests/kvm_runtest_2/kvm_utils.py", line 114, in
>>
>> __init__
>> (pid, fd) = pty.fork()
>> File "/usr/lib64/python2.4/pty.py", line 108, in fork
>> pid = os.fork()
>> OSError: [Errno 12] Cannot allocate memory
>> Persistent state variable __group_level now set to 1
>> END FAIL
>> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>>
>> kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>
>>
>> timestamp=1244717403 localtime=Jun 11 06:50:03
>> Dropping caches
>> 2009-06-11 06:50:03,409 running: sync
>> JOB ERROR: Unhandled OSError: [Errno 12] Cannot allocate memory
>> Traceback (most recent call last):
>> File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
>> execfile(self.control, global_control_vars, global_control_vars)
>> File "/kvm-autotest/client/control", line 1030, in ?
>> cfg_to_test("kvm_tests.cfg")
>> File "/kvm-autotest/client/control", line 1013, in cfg_to_test
>> current_status = job.run_test("kvm_runtest_2", params=dict,
>> tag=tagname)
>> File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
>> utils.drop_caches()
>> File "/kvm-autotest/client/bin/base_utils.py", line 638, in
>> drop_caches
>> utils.system("sync")
>> File "/kvm-autotest/client/common_lib/utils.py", line 510, in
>> system
>> stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
>> File "/kvm-autotest/client/common_lib/utils.py", line 330, in run
>> bg_job = join_bg_jobs(
>> File "/kvm-autotest/client/common_lib/utils.py", line 37, in
>> __init__
>> stdin=stdin)
>> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>> errread, errwrite)
>> File "/usr/lib64/python2.4/subprocess.py", line 902, in
>> _execute_child
>> self.pid = os.fork()
>> OSError: [Errno 12] Cannot allocate memory
>>
>> Persistent state variable __group_level now set to 0
>> END ABORT ---- ---- timestamp=1244717418 localtime=Jun 11
>>
>> 06:50:18 Unhandled OSError: [Errno 12] Cannot allocate memory
>> Traceback (most recent call last):
>> File "/kvm-autotest/client/bin/job.py", line 978, in step_engine
>> execfile(self.control, global_control_vars,
>> global_control_vars)
>> File "/kvm-autotest/client/control", line 1030, in ?
>> cfg_to_test("kvm_tests.cfg")
>> File "/kvm-autotest/client/control", line 1013, in cfg_to_test
>> current_status = job.run_test("kvm_runtest_2", params=dict,
>> tag=tagname)
>> File "/kvm-autotest/client/bin/job.py", line 44, in wrapped
>> utils.drop_caches()
>> File "/kvm-autotest/client/bin/base_utils.py", line 638, in
>> drop_caches
>> utils.system("sync")
>> File "/kvm-autotest/client/common_lib/utils.py", line 510, in
>> system
>> stdout_tee=sys.stdout, stderr_tee=sys.stderr).exit_status
>> File "/kvm-autotest/client/common_lib/utils.py", line 330, in
>> run
>> bg_job = join_bg_jobs(
>> File "/kvm-autotest/client/common_lib/utils.py", line 37, in
>> __init__
>> stdin=stdin)
>> File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
>> errread, errwrite)
>> File "/usr/lib64/python2.4/subprocess.py", line 902, in
>> _execute_child
>> self.pid = os.fork()
>> OSError: [Errno 12] Cannot allocate memory
>> [root@dhcp-66-70-9 kvm_runtest_2]#
>>
> Thanks. It does indeed look like a legitimate OSError in os.fork().
>
> BTW, do you have any idea why the result dir has such a weird name?
> /kvm-autotest/client/results/default/kvm_runtest_2.[RHEL-Server-5.3-64][None][1024][1][qcow2]<no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024>/debug/post_vm1.ppm
>
> And why does a normal-looking tag sometimes appear (in the log messages):
> no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
>
> Why all the [] and <> in the weird version? Did you somehow do that intentionally, or is it some sort of bug?
> And why is 'None' there? The tag is supposed to be the test's 'shortname', which is determined by kvm_config.py
> as it parses kvm_tests.cfg (or the config file you're using).
>
> Normally the result dir should just be kvm_runtest_2.shortname, and in this case:
> kvm_runtest_2.no_boundary.local_stg.RHEL.5.3-server-64.no_ksm.boot_vms.e1000.user.size_1024
>
Hi Michael, it's not a defect or a problem; we named the directories that
way intentionally, for our own purposes. We have since unified the naming
with autotest's style. Thank you so much for the kind reminder. :)
Regards,
--
Yolkfull
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread (newest message: 2009-06-11 9:46 UTC)
Thread overview: 29+ messages
2009-06-08 4:01 [KVM-AUTOTEST PATCH 0/8] Re-submitting some of the patches on the patch queue Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 1/3] Make possible to use kvm_config as a standalone program Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 2/3] Fixing bad line breaks Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [PATCH 3/3] Fix bad logging calls Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 3/8] WinXP step file fixes Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 " Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
2009-06-08 4:01 ` [KVM-AUTOTEST PATCH 6/8] Choose a monitor filename in the constructor of VM class Lucas Meneghel Rodrigues
2009-06-08 15:19 ` Lucas Meneghel Rodrigues
2009-06-08 15:19 ` [KVM-AUTOTEST PATCH 5/8] stepeditor.py: get rid of some shortcuts Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 4/8] RHEL 5.3 step file fixes Lucas Meneghel Rodrigues
2009-06-08 15:18 ` [KVM-AUTOTEST PATCH 3/8] WinXP " Lucas Meneghel Rodrigues
2009-06-08 15:17 ` [KVM-AUTOTEST PATCH 2/8] RHEL-4.7 step files: fix the initial boot barriers Lucas Meneghel Rodrigues
2009-06-08 15:16 ` [KVM-AUTOTEST PATCH 1/8] kvm_config: Allow for "=" in the value of a config parameter Lucas Meneghel Rodrigues
2009-06-09 8:41 ` [KVM-AUTOTEST PATCH] A test patch - Boot VMs until one of them becomes unresponsive Yolkfull Chow
2009-06-09 9:37 ` Yaniv Kaul
2009-06-09 9:57 ` Michael Goldish
2009-06-09 12:45 ` Uri Lublin
2009-06-10 8:12 ` Yolkfull Chow
[not found] <2021156332.1536421244540393444.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-06-09 9:44 ` Michael Goldish
2009-06-10 8:10 ` Yolkfull Chow
[not found] <219655199.1650051244627445364.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-06-10 10:03 ` Michael Goldish
2009-06-10 10:31 ` Yolkfull Chow
[not found] <443392010.1660281244634434026.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-06-10 11:52 ` Michael Goldish
2009-06-11 3:37 ` Yolkfull Chow
[not found] <120253480.1747631244710010660.JavaMail.root@zmail05.collab.prod.int.phx2.redhat.com>
2009-06-11 8:53 ` Michael Goldish
2009-06-11 9:46 ` Yolkfull Chow