* [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool
@ 2018-12-06 10:59 mazliana.mohamad
2018-12-06 10:59 ` [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function mazliana.mohamad
` (2 more replies)
0 siblings, 3 replies; 5+ messages in thread
From: mazliana.mohamad @ 2018-12-06 10:59 UTC (permalink / raw)
To: openembedded-core
From: Mazliana <mazliana.mohamad@intel.com>
Integrated the test-case-mgmt "store" and "report" commands with "manualexecution". Manual test
execution is an alternative to the Testopia test case management tool. This script provides only a
bare-minimum function: the user can only execute all of the test cases that a component has.
To use these scripts, first source the OE build environment, then run the entry point script to see the help.
$ test-case-mgmt
To execute manual test cases, execute the below
$ test-case-mgmt manualexecution <manualjsonfile>
By default, testresults.json is stored in poky/<build_dir>/tmp/log/manual/
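The manual JSON files consumed by this tool split each test case's "@alias" into module, suite, and case names. A minimal sketch of that parsing (the alias and step text below are made-up examples, not taken from an actual file in meta/lib/oeqa/manual):

```python
import json

# A minimal manual test case entry, shaped the way the tool consumes it:
# '@alias' encodes module.suite.testcase, and 'execution' maps step numbers
# (as strings) to action/expected_results pairs.
sample = json.loads("""
[
  {
    "test": {
      "@alias": "bsp-qemu.bsp_qemu.qemu_can_boot",
      "execution": {
        "1": {"action": "Boot the image under qemu.",
              "expected_results": "Image boots to a login prompt."},
        "2": {"action": "Log in as root.",
              "expected_results": "Login succeeds."}
      }
    }
  }
]
""")

# The same split('.', 2) the script uses to derive the three name parts.
module, suite, case = sample[0]['test']['@alias'].split('.', 2)
print(module, suite, case)  # bsp-qemu bsp_qemu qemu_can_boot
```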
[YOCTO #12651]
Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
scripts/test-case-mgmt | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/scripts/test-case-mgmt b/scripts/test-case-mgmt
index b832ee2..ef69db4 100755
--- a/scripts/test-case-mgmt
+++ b/scripts/test-case-mgmt
@@ -17,6 +17,11 @@
# To store test result & log, execute the below
# $ test-case-mgmt store <source_dir> <git_branch>
#
+# To execute manual test cases, execute the below
+# $ test-case-mgmt manualexecution <manualjsonfile>
+#
+# By default testresults.json for manualexecution in poky/<build>/tmp/log/manual/
+#
# Copyright (c) 2018, Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
@@ -40,6 +45,7 @@ import argparse_oe
import scriptutils
import testcasemgmt.store
import testcasemgmt.report
+import testcasemgmt.manualexecution
logger = scriptutils.logger_create('test-case-mgmt')
def _validate_user_input_arguments(args):
@@ -74,6 +80,8 @@ def main():
testcasemgmt.store.register_commands(subparsers)
subparsers.add_subparser_group('report', 'Reporting for test result & log', 200)
testcasemgmt.report.register_commands(subparsers)
+ subparsers.add_subparser_group('manualexecution', 'Execute manual test cases', 100)
+ testcasemgmt.manualexecution.register_commands(subparsers)
args = parser.parse_args()
if args.debug:
logger.setLevel(logging.DEBUG)
--
2.7.4
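The hunk above follows the tool's plugin convention: each module under scripts/lib/testcasemgmt/ exposes register_commands(subparsers) and binds its entry function with set_defaults(func=...). A sketch of that pattern using stock argparse (the real script goes through OE's argparse_oe wrapper, which additionally accepts a 'group' argument; the names below are illustrative):

```python
import argparse

# Stand-in for testcasemgmt.manualexecution's entry point.
def manualexecution(args):
    return 'running %s' % args.file

# Each plugin registers its own subcommand and binds its handler.
def register_commands(subparsers):
    parser = subparsers.add_parser('manualexecution',
                                   help='Execute manual test cases')
    parser.add_argument('file', help='Path to the manual test case JSON file')
    parser.set_defaults(func=manualexecution)

top = argparse.ArgumentParser(prog='test-case-mgmt')
subparsers = top.add_subparsers()
register_commands(subparsers)

# The main() dispatch: parse, then call the bound handler.
args = top.parse_args(['manualexecution', 'build-appliance.json'])
print(args.func(args))  # running build-appliance.json
```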
* [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
@ 2018-12-06 10:59 ` mazliana.mohamad
  2018-12-06 12:07   ` Mohamad, Mazliana
  0 siblings, 1 reply; 5+ messages in thread
From: mazliana.mohamad @ 2018-12-06 10:59 UTC (permalink / raw)
To: openembedded-core
From: Mazliana <mazliana.mohamad@intel.com>
manualexecution is a helper script for running all manual test cases from the command line. The
script displays each test step and its expected results; at the end of the steps, the user enters
the result as a passed/failed/blocked status (any other input is recorded as skipped). The result
is stored in testresults.json, together with any error log text entered by the user and the
configuration. The JSON test result output is written using the OEQA library. The configuration
section is keyed in manually: the user specifies how many configurations to add and defines each
configuration name/value pair. The configuration section was added to standardize the test result
output format between automated and manual execution.
[YOCTO #12651]
Signed-off-by: Mazliana <mazliana.mohamad@intel.com>
---
 scripts/lib/testcasemgmt/manualexecution.py | 141 ++++++++++++++++++++++++++++
 1 file changed, 141 insertions(+)
 create mode 100644 scripts/lib/testcasemgmt/manualexecution.py
diff --git a/scripts/lib/testcasemgmt/manualexecution.py b/scripts/lib/testcasemgmt/manualexecution.py
new file mode 100644
index 0000000..2911c1e
--- /dev/null
+++ b/scripts/lib/testcasemgmt/manualexecution.py
@@ -0,0 +1,141 @@
+# test case management tool - manual execution from testopia test cases
+#
+# Copyright (c) 2018, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+# more details.
+#
+import argparse
+import json
+import os
+import sys
+import datetime
+import scriptpath
+import re
+scriptpath.add_oe_lib_path()
+scriptpath.add_bitbake_lib_path()
+import bb.utils
+from oeqa.core.runner import OETestResultJSONHelper
+
+class ManualTestRunner(object):
+    def __init__(self):
+        self.jdata = ''
+        self.testmodule = ''
+        self.testsuite = ''
+        self.testcase = ''
+        self.configuration = ''
+        self.host_distro = ''
+        self.host_name = ''
+        self.machine = ''
+        self.starttime = ''
+        self.result_id = ''
+
+    def read_json(self, file):
+        self.jdata = json.load(open('%s' % file))
+        self.testcase = []
+        self.testmodule = self.jdata[0]['test']['@alias'].split('.', 2)[0]
+        self.testsuite = self.jdata[0]['test']['@alias'].split('.', 2)[1]
+        for i in range(0, len(self.jdata)):
+            self.testcase.append(self.jdata[i]['test']['@alias'].split('.', 2)[2])
+        return self.jdata, self.testmodule, self.testsuite, self.testcase
+
+    def get_input(self, config):
+        while True:
+            output = input('{} = '.format(config))
+            if re.match('^[a-zA-Z0-9_]+$', output):
+                break
+            print('Only alphanumeric and underscore are allowed. Please try again')
+        return output
+
+    def create_config(self):
+        self.configuration = {}
+        while True:
+            try:
+                conf_total = int(input('\nPlease provide how many configuration you want to save \n'))
+                break
+            except ValueError:
+                print('Invalid input. Please provide input as a number not character.')
+        for i in range(conf_total):
+            print('---------------------------------------------')
+            print('This is your %s ' % (i + 1) + 'configuration. Please provide configuration name and its value')
+            print('---------------------------------------------')
+            self.name_conf = self.get_input('Configuration Name')
+            self.value_conf = self.get_input('Configuration Value')
+            print('---------------------------------------------\n')
+            self.configuration[self.name_conf.upper()] = self.value_conf
+        self.currentDT = datetime.datetime.now()
+        self.starttime = self.currentDT.strftime('%Y%m%d%H%M%S')
+        self.test_type = self.testmodule
+        self.configuration['STARTTIME'] = self.starttime
+        self.configuration['TEST_TYPE'] = self.test_type
+        return self.configuration
+
+    def create_result_id(self):
+        self.result_id = 'manual_' + self.test_type + '_' + self.starttime
+        return self.result_id
+
+    def execute_test_steps(self, testID):
+        temp = {}
+        testcaseID = self.testmodule + '.' + self.testsuite + '.' + self.testcase[testID]
+        print('------------------------------------------------------------------------')
+        print('Executing test case:' + '' '' + self.testcase[testID])
+        print('------------------------------------------------------------------------')
+        print('You have total ' + max(self.jdata[testID]['test']['execution'].keys()) + ' test steps to be executed.')
+        print('------------------------------------------------------------------------\n')
+        for step in range(1, int(max(self.jdata[testID]['test']['execution'].keys()) + '1')):
+            print('Step %s: ' % step + self.jdata[testID]['test']['execution']['%s' % step]['action'])
+            print('Expected output: ' + self.jdata[testID]['test']['execution']['%s' % step]['expected_results'])
+            if step == int(max(self.jdata[testID]['test']['execution'].keys())):
+                done = input('\nPlease provide test results: (P)assed/(F)ailed/(B)locked? \n')
+                break
+            else:
+                done = input('\nPlease press ENTER when you are done to proceed to next step.\n')
+                step = step + 1
+        if done == 'p' or done == 'P':
+            res = 'PASSED'
+        elif done == 'f' or done == 'F':
+            res = 'FAILED'
+            log_input = input('\nPlease enter the error and the description of the log: (Ex:log:211 Error Bitbake)\n')
+        elif done == 'b' or done == 'B':
+            res = 'BLOCKED'
+        else:
+            res = 'SKIPPED'
+        if res == 'FAILED':
+            temp.update({testcaseID: {'status': '%s' % res, 'log': '%s' % log_input}})
+        else:
+            temp.update({testcaseID: {'status': '%s' % res}})
+        return temp
+
+def manualexecution(args, logger):
+    basepath = os.getcwd()
+    write_dir = basepath + '/tmp/log/manual/'
+    sys.path.insert(0, write_dir)
+    manualtestcasejson = ManualTestRunner()
+    manualtestcasejson.read_json(args.file)
+    test_result = {}
+    manualtestcasejson.create_config()
+    manualtestcasejson.create_result_id()
+    print('\nTotal number of test cases in this test suite: ' + '%s\n' % len(manualtestcasejson.jdata))
+    for i in range(0, len(manualtestcasejson.jdata)):
+        test_results = manualtestcasejson.execute_test_steps(i)
+        test_result.update(test_results)
+    resultjsonhelper = OETestResultJSONHelper()
+    resultjsonhelper.dump_testresult_file(write_dir, manualtestcasejson.configuration, manualtestcasejson.result_id, test_result)
+    return 0
+
+def register_commands(subparsers):
+    """Register subcommands from this plugin"""
+    parser_build = subparsers.add_parser('manualexecution', help='Helper script for results populating during manual test execution.',
+                                         description='Helper script for results populating during manual test execution. You can find file in meta/lib/oeqa/manual/',
+                                         group='manualexecution')
+    parser_build.set_defaults(func=manualexecution)
+    parser_build.add_argument('file', help='Specify path to manual test case JSON file. Note: Please use \"\" to encapsulate the file path.')
--
2.7.4
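The per-testcase entries that execute_test_steps() returns and that get aggregated into testresults.json are plain dictionaries keyed by the dotted testcase name; a sketch of the shape (the testcase names and log text below are made up for illustration):

```python
# Shape of a per-testcase result entry: a FAILED entry additionally
# carries the user-supplied log text, as in execute_test_steps().
def make_result(testcase_id, status, log_input=None):
    if status == 'FAILED':
        return {testcase_id: {'status': status, 'log': log_input}}
    return {testcase_id: {'status': status}}

# Results for all cases are merged into one dict, as manualexecution() does.
test_result = {}
test_result.update(make_result('bsp-qemu.bsp_qemu.qemu_can_boot', 'PASSED'))
test_result.update(make_result('bsp-qemu.bsp_qemu.qemu_shutdown', 'FAILED',
                               'log:211 Error Bitbake'))
print(sorted(r['status'] for r in test_result.values()))  # ['FAILED', 'PASSED']
```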
* Re: [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
@ 2018-12-06 12:07 ` Mohamad, Mazliana
  0 siblings, 0 replies; 5+ messages in thread
From: Mohamad, Mazliana @ 2018-12-06 12:07 UTC (permalink / raw)
To: openembedded-core@lists.openembedded.org
Please ignore this patch
-----Original Message-----
From: openembedded-core-bounces@lists.openembedded.org [mailto:openembedded-core-bounces@lists.openembedded.org] On Behalf Of mazliana.mohamad@intel.com
Sent: Thursday, December 6, 2018 6:59 PM
To: openembedded-core@lists.openembedded.org
Subject: [OE-core] [PATCH 2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
* ✗ patchtest: failure for "scripts/test-case-mgmt: add "m..." and 1 more
@ 2018-12-06 11:33 ` Patchwork
  0 siblings, 0 replies; 5+ messages in thread
From: Patchwork @ 2018-12-06 11:33 UTC (permalink / raw)
To: mazliana.mohamad; +Cc: openembedded-core
== Series Details ==
Series:   "scripts/test-case-mgmt: add "m..." and 1 more
Revision: 1
URL:      https://patchwork.openembedded.org/series/15244/
State:    failure
== Summary ==
Thank you for submitting this patch series to OpenEmbedded Core. This is an
automated response. Several tests have been executed on the proposed series
by patchtest resulting in the following failures:
* Issue            Series does not apply on top of target branch [test_series_merge_on_head]
  Suggested fix    Rebase your series on top of targeted branch
  Targeted branch  master (currently at bbe5099ba7)
* Patch            [2/2] script/lib/testcasemgmt/manualexecution.py : manual helper script with bare-minimum function
  Issue            Commit shortlog is too long [test_shortlog_length]
  Suggested fix    Edit shortlog so that it is 90 characters or less (currently 92 characters)
If you believe any of these test results are incorrect, please reply to the
mailing list (openembedded-core@lists.openembedded.org) raising your concerns.
Otherwise we would appreciate you correcting the issues and submitting a new
version of the patchset if applicable. Please ensure you add/increment the
version number when sending the new version (i.e. [PATCH] -> [PATCH v2] ->
[PATCH v3] -> ...).
---
Guidelines:     https://www.openembedded.org/wiki/Commit_Patch_Message_Guidelines
Test framework: http://git.yoctoproject.org/cgit/cgit.cgi/patchtest
Test suite:     http://git.yoctoproject.org/cgit/cgit.cgi/patchtest-oe
* FW: [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool
@ 2018-12-06 12:08 ` Mohamad, Mazliana
  0 siblings, 0 replies; 5+ messages in thread
From: Mohamad, Mazliana @ 2018-12-06 12:08 UTC (permalink / raw)
To: openembedded-core@lists.openembedded.org
Please ignore this patch
-----Original Message-----
From: openembedded-core-bounces@lists.openembedded.org [mailto:openembedded-core-bounces@lists.openembedded.org] On Behalf Of mazliana.mohamad@intel.com
Sent: Thursday, December 6, 2018 6:59 PM
To: openembedded-core@lists.openembedded.org
Subject: [OE-core] [PATCH 1/2] scripts/test-case-mgmt: add "manualexecution" as a tool