* [PATCH 1/4] client.bin.net.net_utils: Introduce get_local_ip()
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
@ 2011-05-24 7:08 ` Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 2/4] client: Make it possible to run subtests in autotest Lucas Meneghel Rodrigues
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-05-24 7:08 UTC (permalink / raw)
To: autotest; +Cc: kvm, Lucas Meneghel Rodrigues, Jiri Zupka
Get an IP address on the local system that can communicate
with a given IP. This will be useful for subtests such as
netperf2.
Signed-off-by: Jiri Zupka <jzupka@redhat.com>
---
client/bin/net/net_utils.py | 17 +++++++++++++++++
1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/client/bin/net/net_utils.py b/client/bin/net/net_utils.py
index 868958c..7c96ba0 100644
--- a/client/bin/net/net_utils.py
+++ b/client/bin/net/net_utils.py
@@ -5,6 +5,7 @@ This library is to release in the public repository.
import commands, os, re, socket, sys, time, struct
from autotest_lib.client.common_lib import error
+from autotest_lib.client.bin import utils as client_utils
import utils
TIMEOUT = 10 # Used for socket timeout and barrier timeout
@@ -27,6 +28,22 @@ class network_utils(object):
utils.system('/sbin/ifconfig -a')
+ def get_ip_local(self, query_ip, netmask="24"):
+ """
+ Get an IP address on the local system that can communicate with query_ip.
+
+ @param query_ip: IP of the client that wants to communicate with the
+ autotest machine.
+ @return: IP address that can communicate with query_ip, or None.
+ """
+ ip = client_utils.system_output("ip addr show to %s/%s" %
+ (query_ip, netmask))
+ ip = re.search(r"inet ([0-9.]*)/", ip)
+ if ip is None:
+ return ip
+ return ip.group(1)
+
+
def disable_ip_local_loopback(self, ignore_status=False):
utils.system("echo '1' > /proc/sys/net/ipv4/route/no_local_loopback",
ignore_status=ignore_status)
--
1.7.5.1
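A stand-alone sketch of the lookup introduced above: `ip addr show to <ip>/<mask>` prints only interfaces whose addresses fall in the given subnet, so the first `inet` entry is a local address that can reach the queried IP. The sample output below is illustrative, not captured from a real run.

```python
import re

def get_ip_local(ip_show_output):
    """Extract the first local IPv4 address from `ip addr show to` output."""
    match = re.search(r"inet ([0-9.]+)/", ip_show_output)
    if match is None:
        return None
    return match.group(1)

# Illustrative output of `ip addr show to 192.168.122.100/24`
sample = """2: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
"""
print(get_ip_local(sample))          # 192.168.122.1
print(get_ip_local("no inet here"))  # None
```

The regex only grabs the dotted-quad before the prefix length, so any extra interface detail in the command output is ignored.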
* [PATCH 2/4] client: Make it possible to run subtests in autotest
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 1/4] client.bin.net.net_utils: Introduce get_local_ip() Lucas Meneghel Rodrigues
@ 2011-05-24 7:08 ` Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 3/4] tools: Make html_report to deal with subtest results Lucas Meneghel Rodrigues
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-05-24 7:08 UTC (permalink / raw)
To: autotest; +Cc: kvm, Lucas Meneghel Rodrigues, Jiri Zupka
Do that by adding a utility function to the test object
that calls another test from inside the current test's
scope.
Signed-off-by: Jiri Zupka <jzupka@redhat.com>
---
client/bin/client_logging_config.py | 5 +++--
client/common_lib/base_job.py | 2 ++
client/common_lib/logging_config.py | 3 ++-
client/common_lib/test.py | 21 ++++++++++++++++++++-
4 files changed, 27 insertions(+), 4 deletions(-)
diff --git a/client/bin/client_logging_config.py b/client/bin/client_logging_config.py
index a59b078..28c007d 100644
--- a/client/bin/client_logging_config.py
+++ b/client/bin/client_logging_config.py
@@ -12,8 +12,9 @@ class ClientLoggingConfig(logging_config.LoggingConfig):
def configure_logging(self, results_dir=None, verbose=False):
- super(ClientLoggingConfig, self).configure_logging(use_console=True,
- verbose=verbose)
+ super(ClientLoggingConfig, self).configure_logging(
+ use_console=self.use_console,
+ verbose=verbose)
if results_dir:
log_dir = os.path.join(results_dir, 'debug')
diff --git a/client/common_lib/base_job.py b/client/common_lib/base_job.py
index 843c0e8..eef9efc 100644
--- a/client/common_lib/base_job.py
+++ b/client/common_lib/base_job.py
@@ -1117,6 +1117,7 @@ class base_job(object):
tag_parts = []
# build up the parts of the tag used for the test name
+ master_testpath = dargs.get('master_testpath', "")
base_tag = dargs.pop('tag', None)
if base_tag:
tag_parts.append(str(base_tag))
@@ -1132,6 +1133,7 @@ class base_job(object):
if subdir_tag:
tag_parts.append(subdir_tag)
subdir = '.'.join([testname] + tag_parts)
+ subdir = os.path.join(master_testpath, subdir)
tag = '.'.join(tag_parts)
return full_testname, subdir, tag
diff --git a/client/common_lib/logging_config.py b/client/common_lib/logging_config.py
index afe754a..9114d7a 100644
--- a/client/common_lib/logging_config.py
+++ b/client/common_lib/logging_config.py
@@ -32,9 +32,10 @@ class LoggingConfig(object):
fmt='%(asctime)s %(levelname)-5.5s| %(message)s',
datefmt='%H:%M:%S')
- def __init__(self):
+ def __init__(self, use_console=True):
self.logger = logging.getLogger()
self.global_level = logging.DEBUG
+ self.use_console = use_console
@classmethod
diff --git a/client/common_lib/test.py b/client/common_lib/test.py
index c55d23b..d5564c3 100644
--- a/client/common_lib/test.py
+++ b/client/common_lib/test.py
@@ -465,6 +465,24 @@ class base_test(object):
self.job.enable_warnings("NETWORK")
+ def runsubtest(self, url, *args, **dargs):
+ """
+ Execute another autotest test from inside the current test's scope.
+
+ @param url: Url of the subtest to run.
+ @param tag: Tag added to the subtest name.
+ @param args: Args for the subtest.
+ @param dargs: Dictionary with keyword args for the subtest.
+ @param iterations: Number of subtest iterations.
+ @param profile_only: If True, execute one profiled run.
+ Results are written under this test's output directory.
+ """
+ dargs["profile_only"] = dargs.get("profile_only", False)
+ test_basepath = self.outputdir[len(self.job.resultdir + "/"):]
+ self.job.run_test(url, master_testpath=test_basepath,
+ *args, **dargs)
+
+
def _get_nonstar_args(func):
"""Extract all the (normal) function parameter names.
@@ -658,7 +676,8 @@ def runtest(job, url, tag, args, dargs,
if not bindir:
raise error.TestError(testname + ': test does not exist')
- outputdir = os.path.join(job.resultdir, testname)
+ subdir = os.path.join(dargs.pop('master_testpath', ""), testname)
+ outputdir = os.path.join(job.resultdir, subdir)
if tag:
outputdir += '.' + tag
--
1.7.5.1
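The effect of the `master_testpath` plumbing above can be sketched stand-alone: the subtest's result subdir is simply nested under its parent test's result path. This is a minimal sketch of the tag/subdir composition, not the full `base_job` logic.

```python
import os

def make_subdir(testname, tag_parts, master_testpath=""):
    # Mirrors the patched logic in base_job: join the tag parts into the
    # test's subdir name, then nest it under the parent test's path
    # (empty for a top-level test).
    subdir = '.'.join([testname] + tag_parts)
    return os.path.join(master_testpath, subdir)

# Top-level test: results land in <resultdir>/netperf2.server
print(make_subdir('netperf2', ['server']))
# Subtest started via runsubtest(): nested under the parent test's dir
print(make_subdir('netperf2', ['client'], master_testpath='kvm.netperf'))
```

The parent path `kvm.netperf` here is a hypothetical example; in practice it is derived from the parent test's `outputdir` relative to `job.resultdir`.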
* [PATCH 3/4] tools: Make html_report to deal with subtest results
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 1/4] client.bin.net.net_utils: Introduce get_local_ip() Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 2/4] client: Make it possible to run subtests in autotest Lucas Meneghel Rodrigues
@ 2011-05-24 7:08 ` Lucas Meneghel Rodrigues
2011-05-24 7:08 ` [PATCH 4/4] KVM test: Rewrite netperf in terms of subtest Lucas Meneghel Rodrigues
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-05-24 7:08 UTC (permalink / raw)
To: autotest; +Cc: kvm
Signed-off-by: Jiri Zupka <jzupka@redhat.com>
---
client/tools/html_report.py | 124 ++++++++++++++++++++++++-------------------
1 files changed, 69 insertions(+), 55 deletions(-)
diff --git a/client/tools/html_report.py b/client/tools/html_report.py
index c4e97b2..563a7a9 100755
--- a/client/tools/html_report.py
+++ b/client/tools/html_report.py
@@ -1372,7 +1372,7 @@ function processList(ul) {
}
"""
-stimelist = []
+
def make_html_file(metadata, results, tag, host, output_file_name, dirname):
@@ -1430,11 +1430,12 @@ return true;
total_failed = 0
total_passed = 0
for res in results:
- total_executed += 1
- if res['status'] == 'GOOD':
- total_passed += 1
- else:
- total_failed += 1
+ if results[res][2] is not None:
+ total_executed += 1
+ if results[res][2]['status'] == 'GOOD':
+ total_passed += 1
+ else:
+ total_failed += 1
stat_str = 'No test cases executed'
if total_executed > 0:
failed_perct = int(float(total_failed)/float(total_executed)*100)
@@ -1471,39 +1472,46 @@ id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alterna
<tbody>
"""
print >> output, result_table_prefix
- for res in results:
- print >> output, '<tr>'
- print >> output, '<td align="left">%s</td>' % res['time']
- print >> output, '<td align="left">%s</td>' % res['testcase']
- if res['status'] == 'GOOD':
- print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
- elif res['status'] == 'FAIL':
- print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
- elif res['status'] == 'ERROR':
- print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
- else:
- print >> output, '<td align=\"left\">%s</td>' % res['status']
- # print exec time (seconds)
- print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
- # print log only if test failed..
- if res['log']:
- #chop all '\n' from log text (to prevent html errors)
- rx1 = re.compile('(\s+)')
- log_text = rx1.sub(' ', res['log'])
-
- # allow only a-zA-Z0-9_ in html title name
- # (due to bug in MS-explorer)
- rx2 = re.compile('([^a-zA-Z_0-9])')
- updated_tag = rx2.sub('_', res['title'])
-
- html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
- print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
- else:
- print >> output, '<td align=\"left\"></td>'
- # print execution time
- print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['title'], "debug")
+ def print_result(result, indent):
+ while result != []:
+ r = result.pop(0)
+ # r is the next test path to render
+ res = results[r][2]
+ print >> output, '<tr>'
+ print >> output, '<td align="left">%s</td>' % res['time']
+ print >> output, '<td align="left" style="padding-left:%dpx">%s</td>' % (indent * 20, res['title'])
+ if res['status'] == 'GOOD':
+ print >> output, '<td align=\"left\"><b><font color="#00CC00">PASS</font></b></td>'
+ elif res['status'] == 'FAIL':
+ print >> output, '<td align=\"left\"><b><font color="red">FAIL</font></b></td>'
+ elif res['status'] == 'ERROR':
+ print >> output, '<td align=\"left\"><b><font color="red">ERROR!</font></b></td>'
+ else:
+ print >> output, '<td align=\"left\">%s</td>' % res['status']
+ # print exec time (seconds)
+ print >> output, '<td align="left">%s</td>' % res['exec_time_sec']
+ # print log only if test failed..
+ if res['log']:
+ #chop all '\n' from log text (to prevent html errors)
+ rx1 = re.compile('(\s+)')
+ log_text = rx1.sub(' ', res['log'])
+
+ # allow only a-zA-Z0-9_ in html title name
+ # (due to bug in MS-explorer)
+ rx2 = re.compile('([^a-zA-Z_0-9])')
+ updated_tag = rx2.sub('_', res['title'])
+
+ html_body_text = '<html><head><title>%s</title></head><body>%s</body></html>' % (str(updated_tag), log_text)
+ print >> output, '<td align=\"left\"><A HREF=\"#\" onClick=\"popup(\'%s\',\'%s\')\">Info</A></td>' % (str(updated_tag), str(html_body_text))
+ else:
+ print >> output, '<td align=\"left\"></td>'
+ # print execution time
+ print >> output, '<td align="left"><A HREF=\"%s\">Debug</A></td>' % os.path.join(dirname, res['subdir'], "debug")
- print >> output, '</tr>'
+ print >> output, '</tr>'
+ print_result(results[r][1], indent + 1)
+
+ print_result(results[""][1], 0)
print >> output, "</tbody></table>"
@@ -1531,21 +1539,27 @@ id="t1" class="stats table-autosort:4 table-autofilter table-stripeclass:alterna
output.close()
-def parse_result(dirname, line):
+def parse_result(dirname, line, results_data):
"""
Parse job status log line.
@param dirname: Job results dir
@param line: Status log line.
+ @param results_data: Dictionary for storing the parsed results.
"""
parts = line.split()
if len(parts) < 4:
return None
- global stimelist
+ global tests
if parts[0] == 'START':
pair = parts[3].split('=')
stime = int(pair[1])
- stimelist.append(stime)
+ results_data[parts[1]] = [stime, [], None]
+ try:
+ parent_test = re.findall(r".*/", parts[1])[0][:-1]
+ results_data[parent_test][1].append(parts[1])
+ except IndexError:
+ results_data[""][1].append(parts[1])
elif (parts[0] == 'END'):
result = {}
@@ -1562,24 +1576,25 @@ def parse_result(dirname, line):
result['exec_time_sec'] = 'na'
tag = parts[3]
+ result['subdir'] = parts[2]
# assign actual values
rx = re.compile('^(\w+)\.(.*)$')
m1 = rx.findall(parts[3])
- result['testcase'] = str(tag)
+ if len(m1):
+ result['testcase'] = m1[0][1]
+ else:
+ result['testcase'] = parts[3]
result['title'] = str(tag)
result['status'] = parts[1]
if result['status'] != 'GOOD':
result['log'] = get_exec_log(dirname, tag)
- if len(stimelist)>0:
+ if len(results_data)>0:
pair = parts[4].split('=')
- try:
- etime = int(pair[1])
- stime = stimelist.pop()
- total_exec_time_sec = etime - stime
- result['exec_time_sec'] = total_exec_time_sec
- except ValueError:
- result['exec_time_sec'] = "Unknown"
- return result
+ etime = int(pair[1])
+ stime = results_data[parts[2]][0]
+ total_exec_time_sec = etime - stime
+ result['exec_time_sec'] = total_exec_time_sec
+ results_data[parts[2]][2] = result
return None
@@ -1702,16 +1717,15 @@ def create_report(dirname, html_path='', output_file_name=None):
host = get_info_file(os.path.join(sysinfo_dir, 'hostname'))
rx = re.compile('^\s+[END|START].*$')
# create the results set dict
- results_data = []
+ results_data = {}
+ results_data[""] = [0, [], None]
if os.path.exists(status_file_name):
f = open(status_file_name, "r")
lines = f.readlines()
f.close()
for line in lines:
if rx.match(line):
- result_dict = parse_result(dirname, line)
- if result_dict:
- results_data.append(result_dict)
+ parse_result(dirname, line, results_data)
# create the meta info dict
metalist = {
'uname': get_info_file(os.path.join(sysinfo_dir, 'uname')),
--
1.7.5.1
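The `results_data` structure built by the patched `parse_result()` can be exercised stand-alone. Each key is a test path mapping to `[start_time, children, end_result]`, with `""` as the synthetic root; START lines create nodes and attach them to the parent path (everything up to the last `/`), END lines fill in the result. The helper names and sample paths below are illustrative only.

```python
import re

results_data = {"": [0, [], None]}

def on_start(path, stime):
    # Create a node and hang it off its parent ("" when the path
    # contains no '/', i.e. a top-level test).
    results_data[path] = [stime, [], None]
    parent = re.findall(r".*/", path)
    if parent:
        results_data[parent[0][:-1]][1].append(path)
    else:
        results_data[""][1].append(path)

def on_end(path, etime, status):
    # Fill in the END result; exec time comes from the matching START.
    stime = results_data[path][0]
    results_data[path][2] = {'status': status,
                             'exec_time_sec': etime - stime}

on_start('kvm.netperf', 100)
on_start('kvm.netperf/netperf2.server', 110)
on_end('kvm.netperf/netperf2.server', 150, 'GOOD')
on_end('kvm.netperf', 160, 'GOOD')

print(results_data[""][1])             # ['kvm.netperf']
print(results_data['kvm.netperf'][1])  # ['kvm.netperf/netperf2.server']
```

This tree shape is what lets `print_result()` in the patch render subtests indented under their parent rows.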
* [PATCH 4/4] KVM test: Rewrite netperf in terms of subtest
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
` (2 preceding siblings ...)
2011-05-24 7:08 ` [PATCH 3/4] tools: Make html_report to deal with subtest results Lucas Meneghel Rodrigues
@ 2011-05-24 7:08 ` Lucas Meneghel Rodrigues
2011-05-24 14:24 ` [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
2011-06-01 19:31 ` Lucas Meneghel Rodrigues
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-05-24 7:08 UTC (permalink / raw)
To: autotest; +Cc: kvm
As the first user of the new subtest infrastructure,
reimplement netperf using subtests. This way we don't have
to care about replicating build steps and other boilerplate
code that is better handled by autotest itself.
Signed-off-by: Lucas Meneghel Rodrigues <lmr@redhat.com>
---
client/virt/tests/netperf.py | 117 +++++++++++-------------------------------
1 files changed, 30 insertions(+), 87 deletions(-)
diff --git a/client/virt/tests/netperf.py b/client/virt/tests/netperf.py
index 8a80d13..ab742f1 100644
--- a/client/virt/tests/netperf.py
+++ b/client/virt/tests/netperf.py
@@ -1,17 +1,18 @@
import logging, os, signal
from autotest_lib.client.common_lib import error
from autotest_lib.client.bin import utils
-from autotest_lib.client.virt import aexpect, virt_utils
+from autotest_lib.client.bin.net import net_utils
+from autotest_lib.client.virt import aexpect, virt_utils, virt_test_utils
+
def run_netperf(test, params, env):
"""
Network stress test with netperf.
1) Boot up a VM with multiple nics.
- 2) Launch netserver on guest.
- 3) Execute multiple netperf clients on host in parallel
- with different protocols.
- 4) Output the test result.
+ 2) Launch netperf server on host.
+ 3) Execute netperf client on guest.
+ 4) Output the test results.
@param test: KVM test object.
@param params: Dictionary with the test parameters.
@@ -21,86 +22,28 @@ def run_netperf(test, params, env):
vm.verify_alive()
login_timeout = int(params.get("login_timeout", 360))
session = vm.wait_for_login(timeout=login_timeout)
- session.close()
- session_serial = vm.wait_for_serial_login(timeout=login_timeout)
-
- netperf_dir = os.path.join(os.environ['AUTODIR'], "tests/netperf2")
- setup_cmd = params.get("setup_cmd")
-
- firewall_flush = "iptables -F"
- session_serial.cmd_output(firewall_flush)
- try:
- utils.run("iptables -F")
- except:
- pass
-
- for i in params.get("netperf_files").split():
- vm.copy_files_to(os.path.join(netperf_dir, i), "/tmp")
-
- try:
- session_serial.cmd(firewall_flush)
- except aexpect.ShellError:
- logging.warning("Could not flush firewall rules on guest")
-
- session_serial.cmd(setup_cmd % "/tmp", timeout=200)
- session_serial.cmd(params.get("netserver_cmd") % "/tmp")
-
- if "tcpdump" in env and env["tcpdump"].is_alive():
- # Stop the background tcpdump process
- try:
- logging.debug("Stopping the background tcpdump")
- env["tcpdump"].close()
- except:
- pass
-
- def netperf(i=0):
- guest_ip = vm.get_address(i)
- logging.info("Netperf_%s: netserver %s" % (i, guest_ip))
- result_file = os.path.join(test.resultsdir, "output_%s_%s"
- % (test.iteration, i ))
- list_fail = []
- result = open(result_file, "w")
- result.write("Netperf test results\n")
-
- for p in params.get("protocols").split():
- packet_size = params.get("packet_size", "1500")
- for size in packet_size.split():
- cmd = params.get("netperf_cmd") % (netperf_dir, p,
- guest_ip, size)
- logging.info("Netperf_%s: protocol %s" % (i, p))
- try:
- netperf_output = utils.system_output(cmd,
- retain_output=True)
- result.write("%s\n" % netperf_output)
- except:
- logging.error("Test of protocol %s failed", p)
- list_fail.append(p)
-
- result.close()
- if list_fail:
- raise error.TestFail("Some netperf tests failed: %s" %
- ", ".join(list_fail))
-
- try:
- logging.info("Setup and run netperf clients on host")
- utils.run(setup_cmd % netperf_dir)
-
- bg = []
- nic_num = len(params.get("nics").split())
- for i in range(nic_num):
- bg.append(virt_utils.Thread(netperf, (i,)))
- bg[i].start()
- completed = False
- while not completed:
- completed = True
- for b in bg:
- if b.is_alive():
- completed = False
- finally:
- try:
- for b in bg:
- if b:
- b.join()
- finally:
- session_serial.cmd_output("killall netserver")
+ session.cmd("iptables -F")
+
+ timeout = int(params.get("test_timeout", 300))
+ control_path = os.path.join(test.tmpdir, "netperf_client.control")
+
+ guest_ip = vm.get_address()
+ host_ip = net_utils.network().get_ip_local(guest_ip)
+ if host_ip is not None:
+ c = open(control_path, 'w')
+ c.write('job.run_test(url="netperf2", server_ip="%s", client_ip="%s", '
+ 'role="client", tag="client")' % (host_ip, guest_ip))
+ c.close()
+ guest = virt_utils.Thread(virt_test_utils.run_autotest,
+ (vm, session, control_path,
+ timeout, test.outputdir, params))
+ guest.start()
+
+ netperf_server_args = {"url":"netperf2", "tag": "server",
+ "server_ip": host_ip, "client_ip": guest_ip,
+ "role": "server"}
+ test.runsubtest(**netperf_server_args)
+
+ else:
+ raise error.TestError("Host cannot reach client over the network")
--
1.7.5.1
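The host/guest coordination in the rewritten test boils down to running the guest-side client on a background thread while the server subtest runs in the foreground. A minimal sketch with plain `threading` and hypothetical stand-ins for `virt_test_utils.run_autotest` and `test.runsubtest` (note the explicit join, which the patch itself does not do):

```python
import threading

def run_client_and_server(server_fn, client_fn):
    # Start the guest-side client in the background, run the server
    # subtest in the main test process, and wait for the client even
    # if the server subtest raises.
    client = threading.Thread(target=client_fn)
    client.start()
    try:
        server_fn()
    finally:
        client.join()

events = []
run_client_and_server(lambda: events.append("server"),
                      lambda: events.append("client"))
print(sorted(events))  # ['client', 'server']
```

Joining the client thread matters here: without it the test could finish (or fail) while the guest-side autotest run is still in flight.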
* Re: [PATCH 0/4] Make possible to run client tests as subtests
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
` (3 preceding siblings ...)
2011-05-24 7:08 ` [PATCH 4/4] KVM test: Rewrite netperf in terms of subtest Lucas Meneghel Rodrigues
@ 2011-05-24 14:24 ` Lucas Meneghel Rodrigues
2011-06-01 19:31 ` Lucas Meneghel Rodrigues
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-05-24 14:24 UTC (permalink / raw)
To: autotest; +Cc: kvm
On Tue, 2011-05-24 at 04:08 -0300, Lucas Meneghel Rodrigues wrote:
> In order to avoid duplication of code, make it possible to run the
> existing autotest client tests as subtests. This patchset is the result
> of work on Jiri Zupka's original single patch; the differences:
>
> * Removed example subtest KVM autotest test
> * Renamed some API introduced to net_utils for consistency
> * Rewrote netperf in terms of the new 'subtest' infrastructure
For the record, there are still some things bothering me
about this patchset:
* I still haven't checked whether the changes to the autotest core (test
class) break the unittests. We might want to write a unittest for the
new methods of the test class.
* Maybe rather than simply calling job.runtest() we would be better off
executing the subtest on a new thread?
* The netperf reimplementation clearly lacks functionality present in
the current implementation. We need more work on it.
I asked Jiri to pick up the patchset and look at these points of
improvement.
> Lucas Meneghel Rodrigues (4):
> client.bin.net.net_utils: Introduce get_local_ip()
> client: Make it possible to run subtests in autotest
> tools: Make html_report to deal with subtest results
> KVM test: Rewrite netperf in terms of subtest
>
> client/bin/client_logging_config.py | 5 +-
> client/bin/net/net_utils.py | 17 +++++
> client/common_lib/base_job.py | 2 +
> client/common_lib/logging_config.py | 3 +-
> client/common_lib/test.py | 21 ++++++-
> client/tools/html_report.py | 124 +++++++++++++++++++---------------
> client/virt/tests/netperf.py | 117 +++++++++------------------------
> 7 files changed, 143 insertions(+), 146 deletions(-)
>
* Re: [PATCH 0/4] Make possible to run client tests as subtests
2011-05-24 7:08 [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
` (4 preceding siblings ...)
2011-05-24 14:24 ` [PATCH 0/4] Make possible to run client tests as subtests Lucas Meneghel Rodrigues
@ 2011-06-01 19:31 ` Lucas Meneghel Rodrigues
5 siblings, 0 replies; 7+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-06-01 19:31 UTC (permalink / raw)
To: autotest; +Cc: kvm
Ok, I have applied patches 1 to 3, after verifying the unittests.
I haven't applied patch 4, as it would cause some loss of
functionality compared to the current netperf test. I'm counting on
you guys (Jiri and Lukas) to write the tests that actually use this
functionality.
Cheers,
On Tue, May 24, 2011 at 4:08 AM, Lucas Meneghel Rodrigues
<lmr@redhat.com> wrote:
> In order to avoid duplication of code, make it possible to run the
> existing autotest client tests as subtests. This patchset is the result
> of work on Jiri Zupka's original single patch; the differences:
>
> * Removed example subtest KVM autotest test
> * Renamed some API introduced to net_utils for consistency
> * Rewrote netperf in terms of the new 'subtest' infrastructure
>
> Lucas Meneghel Rodrigues (4):
> client.bin.net.net_utils: Introduce get_local_ip()
> client: Make it possible to run subtests in autotest
> tools: Make html_report to deal with subtest results
> KVM test: Rewrite netperf in terms of subtest
>
> client/bin/client_logging_config.py | 5 +-
> client/bin/net/net_utils.py | 17 +++++
> client/common_lib/base_job.py | 2 +
> client/common_lib/logging_config.py | 3 +-
> client/common_lib/test.py | 21 ++++++-
> client/tools/html_report.py | 124 +++++++++++++++++++---------------
> client/virt/tests/netperf.py | 117 +++++++++------------------------
> 7 files changed, 143 insertions(+), 146 deletions(-)
>
> --
> 1.7.5.1
>
> _______________________________________________
> Autotest mailing list
> Autotest@test.kernel.org
> http://test.kernel.org/cgi-bin/mailman/listinfo/autotest
>
--
Lucas