public inbox for kdevops@lists.linux.dev
From: Luis Chamberlain <mcgrof@kernel.org>
To: Chuck Lever <cel@kernel.org>, Daniel Gomez <da.gomez@kruces.com>,
	kdevops@lists.linux.dev
Cc: Luis Chamberlain <mcgrof@kernel.org>,
	Daniel Gomez <da.gomez@samsung.com>
Subject: [PATCH v3 07/11] all: run black
Date: Fri,  1 Aug 2025 12:46:31 -0700
Message-ID: <20250801194635.1598544-8-mcgrof@kernel.org>
In-Reply-To: <20250801194635.1598544-1-mcgrof@kernel.org>

Run black across our Python scripts to fix the many styling
inconsistencies that have accumulated. Adopting black as the project
convention also gives bots a clear baseline, so they do not introduce
odd styling of their own.
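As a reference for what the convention produces, here is a minimal
sketch (the `build_parser` helper is hypothetical, written only to
illustrate black's output style as seen throughout this diff: double
quotes, one argument per line with a trailing comma when a call breaks
across lines, and two blank lines between top-level definitions):

```python
import argparse


def build_parser():
    # black rewrites single quotes to double quotes and splits long
    # calls one argument per line, ending with a trailing comma
    parser = argparse.ArgumentParser(description="Illustrates black formatting")
    parser.add_argument(
        "--verbose",
        const=True,
        default=False,
        action="store_const",
        help="Print more verbose information",
    )
    return parser


args = build_parser().parse_args(["--verbose"])
print(args.verbose)  # prints True
```

In a checkout one would typically run `black .` (ideally with the black
version pinned in CI) to keep this layout stable; the exact invocation
used for this series is not shown in the patch.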

Acked-by: Daniel Gomez <da.gomez@samsung.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .../blktests/augment_expunge_list.py          |  95 ++--
 .../workflows/blktests/gen-expunge-args.py    |  46 +-
 .../workflows/blktests/gen-results-dir.py     |  39 +-
 .../blktests/get_new_expunge_files.py         |  16 +-
 .../dynamic-kconfig/gen-dynamic-pci.py        |  89 ++--
 .../workflows/fstests/augment_expunge_list.py | 163 +++++--
 .../workflows/fstests/bad_files_summary.py    |  46 +-
 .../fstests/fstests-checktime-distribution.py |  46 +-
 .../workflows/fstests/gen_results_summary.py  | 132 +++---
 .../fstests/get_new_expunge_files.py          |  24 +-
 playbooks/python/workflows/fstests/lib/git.py |  21 +-
 .../workflows/fstests/xunit_merge_all.py      |  33 +-
 .../sysbench/sysbench-tps-compare.py          |  84 +++-
 .../workflows/sysbench/sysbench-tps-plot.py   |  36 +-
 .../sysbench/sysbench-tps-variance.py         | 435 +++++++++++++-----
 .../gen_pcie_passthrough_guestfs_xml.py       |  49 +-
 .../linux-mirror/python/gen-mirror-files.py   | 131 +++---
 .../linux-mirror/python/start-mirroring.py    | 116 +++--
 scripts/check_commit_format.py                |  28 +-
 .../generation/check_for_atomic_calls.py      |  71 +--
 .../generation/check_for_sleepy_calls.py      | 202 +++++---
 scripts/detect_whitespace_issues.py           |  38 +-
 scripts/fix_whitespace_issues.py              |  44 +-
 scripts/generate_refs.py                      |   6 +-
 scripts/honey-badger.py                       | 103 +++--
 scripts/spdxcheck.py                          | 201 ++++----
 scripts/update_ssh_config_guestfs.py          |  49 +-
 .../workflows/blktests/blktests_watchdog.py   |  75 ++-
 scripts/workflows/cxl/gen_qemu_cxl.py         | 235 +++++++---
 scripts/workflows/fstests/fstests_watchdog.py |  99 ++--
 scripts/workflows/generic/crash_report.py     |   4 +-
 scripts/workflows/generic/crash_watchdog.py   |  78 +++-
 scripts/workflows/lib/blktests.py             |  47 +-
 scripts/workflows/lib/crash.py                |  12 +-
 scripts/workflows/lib/fstests.py              | 155 ++++---
 scripts/workflows/lib/kssh.py                 | 178 ++++---
 scripts/workflows/lib/systemd_remote.py       | 101 ++--
 .../workflows/pynfs/check_pynfs_results.py    |  17 +-
 38 files changed, 2250 insertions(+), 1094 deletions(-)

diff --git a/playbooks/python/workflows/blktests/augment_expunge_list.py b/playbooks/python/workflows/blktests/augment_expunge_list.py
index 4fe856a3339f..a7b8eb42c874 100755
--- a/playbooks/python/workflows/blktests/augment_expunge_list.py
+++ b/playbooks/python/workflows/blktests/augment_expunge_list.py
@@ -15,58 +15,81 @@ import configparser
 from itertools import chain
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
 top_dir = oscheck_ansible_python_dir + "/../../../../"
-blktests_last_kernel = top_dir + 'workflows/blktests/results/last-kernel.txt'
+blktests_last_kernel = top_dir + "workflows/blktests/results/last-kernel.txt"
 expunge_name = "failures.txt"
 
+
 def append_line(output_file, test_failure_line):
     # We want to now add entries like block/xxx where xxx are digits
     output = open(output_file, "a+")
     output.write("%s\n" % test_failure_line)
     output.close()
 
+
 def is_config_bool_true(config, name):
-    if name in config and config[name].strip('\"') == "y":
+    if name in config and config[name].strip('"') == "y":
         return True
     return False
 
+
 def config_string(config, name):
     if name in config:
-        return config[name].strip('\"')
+        return config[name].strip('"')
     return None
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def read_blktest_last_kernel():
     if not os.path.isfile(blktests_last_kernel):
         return None
-    kfile = open(blktests_last_kernel, 'r')
+    kfile = open(blktests_last_kernel, "r")
     all_lines = kfile.readlines()
     kfile.close()
     for line in all_lines:
         return line.strip()
     return None
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Augments expunge list for blktest')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputdir', metavar='<output directory>', type=str,
-                        help='The directory where to generate the expunge failure.txt ')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(description="Augments expunge list for blktest")
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputdir",
+        metavar="<output directory>",
+        type=str,
+        help="The directory where to generate the expunge failure.txt ",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
 
-    dotconfig = top_dir + '/.config'
+    dotconfig = top_dir + "/.config"
     config = get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -85,7 +108,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.bad') and not f.endswith('.dmesg'):
+            if not f.endswith(".bad") and not f.endswith(".dmesg"):
                 continue
 
             bad_files.append(f)
@@ -97,13 +120,13 @@ def main():
         # f may be results/last-run/nodev/meta/009.dmesg
         bad_file_list = f.split("/")
         bad_file_list_len = len(bad_file_list) - 1
-        bad_file =      bad_file_list[bad_file_list_len]
-        test_group =    bad_file_list[bad_file_list_len-1]
-        bdev =          bad_file_list[bad_file_list_len-2]
+        bad_file = bad_file_list[bad_file_list_len]
+        test_group = bad_file_list[bad_file_list_len - 1]
+        bdev = bad_file_list[bad_file_list_len - 2]
 
         if args.verbose:
             sys.stdout.write("%s\n" % bad_file_list)
-            sys.stdout.write("\tbad_file: %s\n" %bad_file)
+            sys.stdout.write("\tbad_file: %s\n" % bad_file)
             sys.stdout.write("\ttest_group: %s\n" % test_group)
             sys.stdout.write("\tkernel: %s\n" % kernel)
 
@@ -126,11 +149,11 @@ def main():
             sys.exit(1)
 
         # This is like for example block/xxx where xxx are digits
-        test_failure_line = test_group + '/' + bad_file_test_number
+        test_failure_line = test_group + "/" + bad_file_test_number
 
         # now to stuff this into expunge files such as:
         # expunges/sles/15.3/failures.txt
-        expunge_kernel_dir = args.outputdir + '/' + kernel + '/'
+        expunge_kernel_dir = args.outputdir + "/" + kernel + "/"
         output_dir = expunge_kernel_dir
         output_file = output_dir + expunge_name
         shortcut_kernel_dir = None
@@ -145,19 +168,23 @@ def main():
                 sles_release_name = config_string(config, "CONFIG_KDEVOPS_HOSTS_PREFIX")
                 sles_release_parts = sles_release_name.split("sp")
                 if len(sles_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % sles_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % sles_release_name
+                    )
                     sys.exit(1)
-                sles_point_release = sles_release_parts[0].split("sles")[1] + "." + sles_release_parts[1]
+                sles_point_release = (
+                    sles_release_parts[0].split("sles")[1] + "." + sles_release_parts[1]
+                )
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_dir = args.outputdir + '/' + "sles/" + sles_point_release + '/'
+                shortcut_dir = args.outputdir + "/" + "sles/" + sles_point_release + "/"
                 shortcut_kernel_dir = shortcut_dir
                 shortcut_file = shortcut_dir + expunge_name
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/'
+                shortcut_kernel_dir = args.outputdir + "/" + shortcut_kernel + "/"
                 shortcut_dir = shortcut_kernel_dir
                 shortcut_file = shortcut_dir + expunge_name
 
@@ -170,10 +197,13 @@ def main():
                 os.makedirs(output_dir)
 
         if not os.path.isfile(output_file):
-            sys.stdout.write("====%s/%s new failure found file was empty\n" % (test_group, test_failure_line))
+            sys.stdout.write(
+                "====%s/%s new failure found file was empty\n"
+                % (test_group, test_failure_line)
+            )
             append_line(output_file, test_failure_line)
         else:
-            existing_file = open(output_file, 'r')
+            existing_file = open(output_file, "r")
             all_lines = existing_file.readlines()
             existing_file.close()
             found = False
@@ -182,13 +212,18 @@ def main():
                     found = True
                     break
             if not found:
-                sys.stdout.write("%s %s new failure found\n" % (test_group, test_failure_line))
+                sys.stdout.write(
+                    "%s %s new failure found\n" % (test_group, test_failure_line)
+                )
                 append_line(output_file, test_failure_line)
 
     if expunge_kernel_dir != "":
         sys.stdout.write("Sorting %s ...\n" % (expunge_kernel_dir))
-        sys.stdout.write("Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir))
+        sys.stdout.write(
+            "Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir)
+        )
         subprocess.call([oscheck_sort_expunge, expunge_kernel_dir])
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/gen-expunge-args.py b/playbooks/python/workflows/blktests/gen-expunge-args.py
index e73713f826dd..4d7f6d4c61d7 100755
--- a/playbooks/python/workflows/blktests/gen-expunge-args.py
+++ b/playbooks/python/workflows/blktests/gen-expunge-args.py
@@ -14,16 +14,37 @@ import os
 import sys
 import subprocess
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Generates expunge arguments to run blktests check based on results directory')
-    parser.add_argument('--test-group', metavar='<group>', type=str,
-                        help='group of tests to focus on otherwise all groups are considered')
-    parser.add_argument('results', metavar='<directory with blktests results>', type=str,
-                        help='directory with blktests results')
-    parser.add_argument('--gen-exclude-args', const=True, default=False, action="store_const",
-                        help='Generate exclude arguments so to be passed to blktests check')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(
+        description="Generates expunge arguments to run blktests check based on results directory"
+    )
+    parser.add_argument(
+        "--test-group",
+        metavar="<group>",
+        type=str,
+        help="group of tests to focus on otherwise all groups are considered",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with blktests results>",
+        type=str,
+        help="directory with blktests results",
+    )
+    parser.add_argument(
+        "--gen-exclude-args",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Generate exclude arguments so to be passed to blktests check",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     bad_files = []
@@ -34,7 +55,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if f.endswith('.bad') or f.endswith('.dmesg'):
+            if f.endswith(".bad") or f.endswith(".dmesg"):
                 bad_files.append(f)
                 continue
     exclude_args = ""
@@ -58,12 +79,13 @@ def main():
         if args.test_group and args.test_group != group:
             continue
         if args.gen_exclude_args:
-            exclude_args += (" -x %s/%s" % (group, fail))
+            exclude_args += " -x %s/%s" % (group, fail)
         else:
             sys.stdout.write("%s/%s\n" % (group, fail))
 
     if args.gen_exclude_args:
         sys.stdout.write("%s\n" % (exclude_args))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/gen-results-dir.py b/playbooks/python/workflows/blktests/gen-results-dir.py
index cb3e76aabce6..e18504bcffd0 100755
--- a/playbooks/python/workflows/blktests/gen-results-dir.py
+++ b/playbooks/python/workflows/blktests/gen-results-dir.py
@@ -18,7 +18,8 @@ oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
 top_dir = oscheck_ansible_python_dir + "/../../../../"
 results_dir = top_dir + "workflows/blktests/results/"
 last_run_dir = results_dir + "last-run/"
-blktests_last_kernel = top_dir + 'workflows/blktests/results/last-kernel.txt'
+blktests_last_kernel = top_dir + "workflows/blktests/results/last-kernel.txt"
+
 
 def clean_empty_dir(target_results):
     for i in range(1, 3):
@@ -31,12 +32,23 @@ def clean_empty_dir(target_results):
                 else:
                     clean_empty_dir(f)
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('--clean-dir-only', metavar='<clean_dir_only>', type=str, default='none',
-                        help='Do not perform an evaluation, just clean empty directories on the specified directory')
-    parser.add_argument('--copy-all', action='store_true',
-                        help='Copy all test results without filtering')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "--clean-dir-only",
+        metavar="<clean_dir_only>",
+        type=str,
+        default="none",
+        help="Do not perform an evaluation, just clean empty directories on the specified directory",
+    )
+    parser.add_argument(
+        "--copy-all",
+        action="store_true",
+        help="Copy all test results without filtering",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(blktests_last_kernel):
@@ -44,7 +56,7 @@ def main():
         sys.exit(1)
 
     kernel = None
-    f = open(blktests_last_kernel, 'r')
+    f = open(blktests_last_kernel, "r")
     for line in f:
         kernel = line.strip()
     if not line:
@@ -56,9 +68,11 @@ def main():
         clean_empty_dir(args.clean_dir_only)
         sys.exit(0)
 
-    target_results = results_dir + kernel + '/'
+    target_results = results_dir + kernel + "/"
     if not os.path.isdir(last_run_dir):
-        sys.stdout.write("Ignoring last-run directory %s as it is empty ...\n" % (last_run_dir))
+        sys.stdout.write(
+            "Ignoring last-run directory %s as it is empty ...\n" % (last_run_dir)
+        )
         sys.exit(0)
     sys.stdout.write("Copying %s to %s ...\n" % (last_run_dir, target_results))
     copytree(last_run_dir, target_results, dirs_exist_ok=True)
@@ -89,8 +103,8 @@ def main():
                     test_name = test_name_file_list[0]
 
                 test_dir = os.path.dirname(f)
-                name_lookup_base = test_dir + '/' + test_name + '*'
-                name_lookup = test_dir + '/' + test_name + '.*'
+                name_lookup_base = test_dir + "/" + test_name + "*"
+                name_lookup = test_dir + "/" + test_name + ".*"
                 listing = glob.glob(name_lookup)
                 bad_ext_found = False
                 if len(listing) > 0:
@@ -102,5 +116,6 @@ def main():
                         os.unlink(r)
     clean_empty_dir(target_results)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/get_new_expunge_files.py b/playbooks/python/workflows/blktests/get_new_expunge_files.py
index 45f0e1a3c3d8..f1f1241bfc7b 100755
--- a/playbooks/python/workflows/blktests/get_new_expunge_files.py
+++ b/playbooks/python/workflows/blktests/get_new_expunge_files.py
@@ -14,10 +14,17 @@ import sys
 import subprocess
 from lib import git
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('expunge_dir', metavar='<directory with expunge files>', type=str,
-                        help='directory with expunge files')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "expunge_dir",
+        metavar="<directory with expunge files>",
+        type=str,
+        help="directory with expunge files",
+    )
     args = parser.parse_args()
 
     block_expunge_dir = args.expunge_dir
@@ -36,5 +43,6 @@ def main():
                     short_file = f.split("../")[1]
                 sys.stdout.write("%s\n" % (short_file))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py b/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
index ed984ae1d778..107bb25b40a5 100755
--- a/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
+++ b/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
@@ -15,27 +15,31 @@ sys_bus_prefix = "/sys/bus/pci/devices/"
 
 debug = 0
 
+
 def get_first_dir(path):
     if len(os.listdir(path)) > 0:
         return os.listdir(path)[0]
     return None
 
+
 def get_sysname(sys_path, entry):
     sys_entry_path = sys_path + entry
     if not os.path.isfile(sys_entry_path):
         return None
-    entry_fd = open(sys_entry_path, 'r')
+    entry_fd = open(sys_entry_path, "r")
     line = entry_fd.readlines()[0]
     line = line.strip()
     entry_fd.close()
     return line
 
+
 # kconfig does not like some characters
 def strip_kconfig_name(name):
-    fixed_name = name.replace("\"", "")
+    fixed_name = name.replace('"', "")
     fixed_name = fixed_name.replace("'", "")
     return fixed_name
 
+
 def get_special_device_nvme(pci_id, IOMMUGroup):
     pci_id_name = strip_kconfig_name(pci_id)
     sys_path = sys_bus_prefix + pci_id + "/nvme/"
@@ -51,7 +55,13 @@ def get_special_device_nvme(pci_id, IOMMUGroup):
     fw = get_sysname(block_sys_path, "firmware_rev")
     if not fw:
         return None
-    return "%s IOMMU group %s - /dev/%s - %s with FW %s" % (pci_id_name, IOMMUGroup, block_device_name, model, fw)
+    return "%s IOMMU group %s - /dev/%s - %s with FW %s" % (
+        pci_id_name,
+        IOMMUGroup,
+        block_device_name,
+        model,
+        fw,
+    )
 
 
 def get_kconfig_device_name(pci_id, sdevice, IOMMUGroup):
@@ -63,26 +73,31 @@ def get_kconfig_device_name(pci_id, sdevice, IOMMUGroup):
         return strip_kconfig_name(default_name)
     return strip_kconfig_name(special_name)
 
+
 def add_pcie_kconfig_string(prefix, val, name):
     config_name = prefix + "_" + name.upper()
     sys.stdout.write("config %s\n" % (config_name))
     sys.stdout.write("\tstring\n")
-    sys.stdout.write("\tdefault \"%s\"\n" % (strip_kconfig_name(str(val))))
+    sys.stdout.write('\tdefault "%s"\n' % (strip_kconfig_name(str(val))))
     sys.stdout.write("\n")
 
+
 def add_pcie_kconfig_name(config_name, sdevice):
     sys.stdout.write("config %s\n" % (config_name))
-    sys.stdout.write("\tbool \"%s\"\n" % (sdevice))
+    sys.stdout.write('\tbool "%s"\n' % (sdevice))
     sys.stdout.write("\tdefault n\n")
     sys.stdout.write("\thelp\n")
     sys.stdout.write("\t  Enabling this will PCI-E passthrough this device onto the\n")
     sys.stdout.write("\t  target guest.\n")
     sys.stdout.write("\n")
 
+
 def add_pcie_kconfig_target(config_name, sdevice):
     sys.stdout.write("config %s_TARGET_GUEST\n" % (config_name))
-    sys.stdout.write("\tstring  \"Taret guest to offload %s\"\n" % (strip_kconfig_name(sdevice)))
-    sys.stdout.write("\tdefault \"\"\n")
+    sys.stdout.write(
+        '\tstring  "Taret guest to offload %s"\n' % (strip_kconfig_name(sdevice))
+    )
+    sys.stdout.write('\tdefault ""\n')
     sys.stdout.write("\tdepends on %s\n" % config_name)
     sys.stdout.write("\tdepends on KDEVOPS_LIBVIRT_PCIE_PASSTHROUGH_TYPE_EACH\n")
     sys.stdout.write("\thelp\n")
@@ -90,7 +105,10 @@ def add_pcie_kconfig_target(config_name, sdevice):
     sys.stdout.write("\t  target guest.\n")
     sys.stdout.write("\n")
 
-def add_pcie_kconfig_entry(pci_id, sdevice, domain, bus, slot, function, IOMMUGroup, config_id):
+
+def add_pcie_kconfig_entry(
+    pci_id, sdevice, domain, bus, slot, function, IOMMUGroup, config_id
+):
     prefix = passthrough_prefix + "_%04d" % config_id
     name = get_kconfig_device_name(pci_id, sdevice, IOMMUGroup)
     add_pcie_kconfig_name(prefix, name)
@@ -104,22 +122,23 @@ def add_pcie_kconfig_entry(pci_id, sdevice, domain, bus, slot, function, IOMMUGr
     add_pcie_kconfig_string(prefix, slot, "slot")
     add_pcie_kconfig_string(prefix, function, "function")
 
+
 def add_new_device(slot, sdevice, IOMMUGroup, possible_id):
     # Example expeced format 0000:2d:00.0
-    m = re.match(r"^(?P<DOMAIN>\w+):"
-                  "(?P<BUS>\w+):"
-                  "(?P<MSLOT>\w+)\."
-                  "(?P<FUNCTION>\w+)$", slot)
+    m = re.match(
+        r"^(?P<DOMAIN>\w+):" "(?P<BUS>\w+):" "(?P<MSLOT>\w+)\." "(?P<FUNCTION>\w+)$",
+        slot,
+    )
     if not m:
         return possible_id
 
     possible_id += 1
 
     slot_dict = m.groupdict()
-    domain = "0x" + slot_dict['DOMAIN']
-    bus = "0x" + slot_dict['BUS']
-    mslot = "0x" + slot_dict['MSLOT']
-    function = "0x" + slot_dict['FUNCTION']
+    domain = "0x" + slot_dict["DOMAIN"]
+    bus = "0x" + slot_dict["BUS"]
+    mslot = "0x" + slot_dict["MSLOT"]
+    function = "0x" + slot_dict["FUNCTION"]
 
     if debug:
         sys.stdout.write("\tslot: %s\n" % (slot))
@@ -130,17 +149,26 @@ def add_new_device(slot, sdevice, IOMMUGroup, possible_id):
         sys.stdout.write("\tIOMMUGroup: %s\n" % (IOMMUGroup))
 
     if possible_id == 1:
-        sys.stdout.write("# Automatically generated PCI-E passthrough Kconfig by kdevops\n\n")
+        sys.stdout.write(
+            "# Automatically generated PCI-E passthrough Kconfig by kdevops\n\n"
+        )
 
-    add_pcie_kconfig_entry(slot, sdevice, domain, bus, mslot, function, IOMMUGroup, possible_id)
+    add_pcie_kconfig_entry(
+        slot, sdevice, domain, bus, mslot, function, IOMMUGroup, possible_id
+    )
 
     return possible_id
 
+
 def main():
     num_candidate_devices = 0
-    parser = argparse.ArgumentParser(description='Creates a Kconfig file lspci output')
-    parser.add_argument('input', metavar='<input file with lspci -Dvmmm output>', type=str,
-                        help='input file wth lspci -Dvmmm output')
+    parser = argparse.ArgumentParser(description="Creates a Kconfig file lspci output")
+    parser.add_argument(
+        "input",
+        metavar="<input file with lspci -Dvmmm output>",
+        type=str,
+        help="input file wth lspci -Dvmmm output",
+    )
     args = parser.parse_args()
 
     lspci_output = args.input
@@ -149,7 +177,7 @@ def main():
         sys.stdout.write("input file did not exist: %s\n" % (lspci_output))
         sys.exit(1)
 
-    lspci = open(lspci_output, 'r')
+    lspci = open(lspci_output, "r")
     all_lines = lspci.readlines()
     lspci.close()
 
@@ -159,17 +187,18 @@ def main():
 
     for line in all_lines:
         line = line.strip()
-        m = re.match(r"^(?P<TAG>\w+):"
-                      "(?P<STRING>.*)$", line)
+        m = re.match(r"^(?P<TAG>\w+):" "(?P<STRING>.*)$", line)
         if not m:
             continue
         eval_line = m.groupdict()
-        tag = eval_line['TAG']
-        data = eval_line['STRING']
+        tag = eval_line["TAG"]
+        data = eval_line["STRING"]
         data = data.strip()
         if tag == "Slot":
             if sdevice:
-                num_candidate_devices = add_new_device(slot, sdevice, IOMMUGroup, num_candidate_devices)
+                num_candidate_devices = add_new_device(
+                    slot, sdevice, IOMMUGroup, num_candidate_devices
+                )
             slot = data
             sdevice = None
             IOMMUGroup = None
@@ -180,11 +209,13 @@ def main():
 
     # Handle the last device
     if sdevice and slot:
-        num_candidate_devices = add_new_device(slot, sdevice, IOMMUGroup, num_candidate_devices)
+        num_candidate_devices = add_new_device(
+            slot, sdevice, IOMMUGroup, num_candidate_devices
+        )
 
     add_pcie_kconfig_string(passthrough_prefix, num_candidate_devices, "NUM_DEVICES")
     os.unlink(lspci_output)
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/augment_expunge_list.py b/playbooks/python/workflows/fstests/augment_expunge_list.py
index 9265cd8bc169..7f31401e0b88 100755
--- a/playbooks/python/workflows/fstests/augment_expunge_list.py
+++ b/playbooks/python/workflows/fstests/augment_expunge_list.py
@@ -16,44 +16,69 @@ import configparser
 from itertools import chain
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
 top_dir = oscheck_ansible_python_dir + "/../../../../"
 
+
 def append_line(output_file, test_failure_line):
     # We want to now add entries like generic/xxx where xxx are digits
     output = open(output_file, "a+")
     output.write("%s\n" % test_failure_line)
     output.close()
 
+
 def is_config_bool_true(config, name):
-    if name in config and config[name].strip('\"') == "y":
+    if name in config and config[name].strip('"') == "y":
         return True
     return False
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Augments expunge list for oscheck')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputdir', metavar='<output directory>', type=str,
-                        help='The directory where to generate the expunge lists to')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(description="Augments expunge list for oscheck")
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputdir",
+        metavar="<output directory>",
+        type=str,
+        help="The directory where to generate the expunge lists to",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
 
-#    all_files = os.listdir(args.results)
-    dotconfig = top_dir + '/.config'
+    #    all_files = os.listdir(args.results)
+    dotconfig = top_dir + "/.config"
     config = get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -67,7 +92,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if f.endswith('.bad') or f.endswith('.dmesg'):
+            if f.endswith(".bad") or f.endswith(".dmesg"):
                 bad_files.append(f)
     for f in bad_files:
         if args.verbose:
@@ -78,15 +103,15 @@ def main():
         # where xxx are digits
         bad_file_list = f.split("/")
         bad_file_list_len = len(bad_file_list) - 1
-        bad_file =      bad_file_list[bad_file_list_len]
-        test_group =    bad_file_list[bad_file_list_len-1]
-        section =       bad_file_list[bad_file_list_len-2]
-        kernel =        bad_file_list[bad_file_list_len-3]
-        hostname =      bad_file_list[bad_file_list_len-4]
+        bad_file = bad_file_list[bad_file_list_len]
+        test_group = bad_file_list[bad_file_list_len - 1]
+        section = bad_file_list[bad_file_list_len - 2]
+        kernel = bad_file_list[bad_file_list_len - 3]
+        hostname = bad_file_list[bad_file_list_len - 4]
 
         if args.verbose:
             sys.stdout.write("%s\n" % bad_file_list)
-            sys.stdout.write("\tbad_file: %s\n" %bad_file)
+            sys.stdout.write("\tbad_file: %s\n" % bad_file)
             sys.stdout.write("\ttest_group: %s\n" % test_group)
             sys.stdout.write("\tsection: %s\n" % section)
             sys.stdout.write("\thostname: %s\n" % hostname)
@@ -94,20 +119,22 @@ def main():
         bad_file_parts = bad_file.split(".")
         bad_file_test_number = bad_file_parts[0]
         # This is like for example generic/xxx where xxx are digits
-        test_failure_line = test_group + '/' + bad_file_test_number
+        test_failure_line = test_group + "/" + bad_file_test_number
 
         # now to stuff this into expunge files such as:
         # path/4.19.17/xfs/unassigned/xfs_nocrc.txt
-        expunge_kernel_dir = args.outputdir + '/' + kernel + '/' + args.filesystem + '/'
-        output_dir = expunge_kernel_dir + 'unassigned/'
-        output_file = output_dir + section + '.txt'
+        expunge_kernel_dir = args.outputdir + "/" + kernel + "/" + args.filesystem + "/"
+        output_dir = expunge_kernel_dir + "unassigned/"
+        output_file = output_dir + section + ".txt"
 
         base_kernel = kernel
         if base_kernel.endswith("+"):
             base_kernel = kernel.replace("+", "")
-            base_expunge_kernel_dir = args.outputdir + '/' + base_kernel + '/' + args.filesystem + '/'
-            base_output_dir = base_expunge_kernel_dir + 'unassigned/'
-            base_output_file = base_output_dir + section + '.txt'
+            base_expunge_kernel_dir = (
+                args.outputdir + "/" + base_kernel + "/" + args.filesystem + "/"
+            )
+            base_output_dir = base_expunge_kernel_dir + "unassigned/"
+            base_output_file = base_output_dir + section + ".txt"
 
         shortcut_kernel_dir = None
         shortcut_dir = None
@@ -124,24 +151,39 @@ def main():
                 sles_release_name = sles_release_parts[0]
                 sles_release_parts = sles_release_name.split("sp")
                 if len(sles_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % sles_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % sles_release_name
+                    )
                     sys.exit(1)
                 sles_point_release = sles_release_parts[0] + "." + sles_release_parts[1]
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_kernel_dir = args.outputdir + '/' + "sles/" + sles_point_release + '/' + args.filesystem + '/'
+                shortcut_kernel_dir = (
+                    args.outputdir
+                    + "/"
+                    + "sles/"
+                    + sles_point_release
+                    + "/"
+                    + args.filesystem
+                    + "/"
+                )
 
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/' + args.filesystem + '/'
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_kernel_dir = (
+                    args.outputdir + "/" + shortcut_kernel + "/" + args.filesystem + "/"
+                )
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
         elif is_config_bool_true(config, "CONFIG_LIBVIRT_OPENSUSE"):
-            if is_config_bool_true(config, "CONFIG_WORKFLOW_KOTD_ENABLE") and "leap" in hostname:
+            if (
+                is_config_bool_true(config, "CONFIG_WORKFLOW_KOTD_ENABLE")
+                and "leap" in hostname
+            ):
                 leap_host_parts = hostname.split("leap")
                 if len(leap_host_parts) <= 1:
                     sys.stderr.write("Invalid hostname: %s\n" % hostname)
@@ -150,22 +192,34 @@ def main():
                 leap_release_name = leap_release_parts[0]
                 leap_release_parts = leap_release_name.split("sp")
                 if len(leap_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % leap_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % leap_release_name
+                    )
                     sys.exit(1)
                 leap_point_release = leap_release_parts[0] + "." + leap_release_parts[1]
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_kernel_dir = args.outputdir + '/' + "opensuse-leap/" + leap_point_release + '/' + args.filesystem + '/'
+                shortcut_kernel_dir = (
+                    args.outputdir
+                    + "/"
+                    + "opensuse-leap/"
+                    + leap_point_release
+                    + "/"
+                    + args.filesystem
+                    + "/"
+                )
 
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/' + args.filesystem + '/'
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_kernel_dir = (
+                    args.outputdir + "/" + shortcut_kernel + "/" + args.filesystem + "/"
+                )
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
 
         if not os.path.isdir(output_dir):
             if shortcut_dir and os.path.isdir(shortcut_dir):
@@ -173,7 +227,10 @@ def main():
                 output_file = shortcut_file
                 expunge_kernel_dir = shortcut_kernel_dir
             elif base_kernel != kernel and os.path.isdir(base_output_dir):
-                sys.stdout.write("<== expunges for %s not found but found base kernel %s expunge directory ==>\n" % (kernel, base_kernel))
+                sys.stdout.write(
+                    "<== expunges for %s not found but found base kernel %s expunge directory ==>\n"
+                    % (kernel, base_kernel)
+                )
                 expunge_kernel_dir = base_expunge_kernel_dir
                 output_dir = base_output_dir
                 output_file = base_output_file
@@ -182,10 +239,13 @@ def main():
                 os.makedirs(output_dir)
 
         if not os.path.isfile(output_file):
-            sys.stdout.write("====%s %s new failure found file was empty\n" % (section, test_failure_line))
+            sys.stdout.write(
+                "====%s %s new failure found file was empty\n"
+                % (section, test_failure_line)
+            )
             append_line(output_file, test_failure_line)
         else:
-            existing_file = open(output_file, 'r')
+            existing_file = open(output_file, "r")
             all_lines = existing_file.readlines()
             existing_file.close()
             found = False
@@ -194,13 +254,18 @@ def main():
                     found = True
                     break
             if not found:
-                sys.stdout.write("%s %s new failure found\n" % (section, test_failure_line))
+                sys.stdout.write(
+                    "%s %s new failure found\n" % (section, test_failure_line)
+                )
                 append_line(output_file, test_failure_line)
 
     if expunge_kernel_dir != "":
         sys.stdout.write("Sorting %s ...\n" % (expunge_kernel_dir))
-        sys.stdout.write("Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir))
+        sys.stdout.write(
+            "Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir)
+        )
         subprocess.call([oscheck_sort_expunge, expunge_kernel_dir])
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/bad_files_summary.py b/playbooks/python/workflows/fstests/bad_files_summary.py
index 164409b52466..d0cf4058d596 100755
--- a/playbooks/python/workflows/fstests/bad_files_summary.py
+++ b/playbooks/python/workflows/fstests/bad_files_summary.py
@@ -11,6 +11,7 @@ import argparse
 import os
 import sys
 
+
 def parse_results_ascii(sections, results, kernel, filesystem):
     sys.stdout.write("%s on %s\n" % (filesystem, kernel))
     for section in sections:
@@ -18,6 +19,7 @@ def parse_results_ascii(sections, results, kernel, filesystem):
         for test in results[section]:
             sys.stdout.write("\t%s\n" % test)
 
+
 def parse_results_html(sections, results, kernel, filesystem):
     sys.stdout.write("<html><title>%s on %s</title><body>" % (filesystem, kernel))
     sys.stdout.write("<h1>%s on %s</h1>\n" % (filesystem, kernel))
@@ -33,15 +35,28 @@ def parse_results_html(sections, results, kernel, filesystem):
             sys.stdout.write("</tr>\n")
     sys.stdout.write("</table></body></html>")
 
+
 def main():
-    parser = argparse.ArgumentParser(description='generate html file from results')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('--format', metavar='<output format>', type=str,
-                        help='Output format: ascii html, the default is ascii',
-                        default='txt')
+    parser = argparse.ArgumentParser(description="generate html file from results")
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "--format",
+        metavar="<output format>",
+        type=str,
+        help="Output format: ascii html, the default is ascii",
+        default="txt",
+    )
     args = parser.parse_args()
     results = dict()
     sections = list()
@@ -51,27 +66,27 @@ def main():
     for root, dirs, all_files in os.walk(args.results):
         for fname in all_files:
             f = os.path.join(root, fname)
-            #sys.stdout.write("%s\n" % f)
+            # sys.stdout.write("%s\n" % f)
             if os.path.isdir(f):
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.bad'):
+            if not f.endswith(".bad"):
                 continue
 
             # f may be results/oscheck-xfs/4.19.0-4-amd64/xfs/generic/091.out.bad
             bad_file_list = f.split("/")
             bad_file_list_len = len(bad_file_list) - 1
             bad_file = bad_file_list[bad_file_list_len]
-            test_type = bad_file_list[bad_file_list_len-1]
-            section = bad_file_list[bad_file_list_len-2]
-            kernel = bad_file_list[bad_file_list_len-3]
+            test_type = bad_file_list[bad_file_list_len - 1]
+            section = bad_file_list[bad_file_list_len - 2]
+            kernel = bad_file_list[bad_file_list_len - 3]
 
             bad_file_parts = bad_file.split(".")
             bad_file_part_len = len(bad_file_parts) - 1
             bad_file_test_number = bad_file_parts[bad_file_part_len - 2]
             # This is like for example generic/091
-            test_failure_line = test_type + '/' + bad_file_test_number
+            test_failure_line = test_type + "/" + bad_file_test_number
 
             test_section = results.get(section)
             if not test_section:
@@ -86,5 +101,6 @@ def main():
     else:
         parse_results_ascii(sections, results, kernel, args.filesystem)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/fstests-checktime-distribution.py b/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
index b84e0a290110..75025fa2a541 100755
--- a/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
+++ b/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
@@ -16,12 +16,21 @@ import subprocess
 import collections
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
+
 
 def main():
-    parser = argparse.ArgumentParser(description='Creates check.time.distribution files for all found check.time files')
-    parser.add_argument('results', metavar='<directory with check.time files>', type=str,
-                        help='directory with check.time files')
+    parser = argparse.ArgumentParser(
+        description="Creates check.time.distribution files for all found check.time files"
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with check.time files>",
+        type=str,
+        help="directory with check.time files",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
@@ -31,22 +40,22 @@ def main():
     for root, dirs, all_files in os.walk(args.results):
         for fname in all_files:
             f = os.path.join(root, fname)
-            #sys.stdout.write("%s\n" % f)
+            # sys.stdout.write("%s\n" % f)
             if os.path.isdir(f):
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('check.time'):
+            if not f.endswith("check.time"):
                 continue
 
             # f may be results/oscheck-xfs/4.19.0-4-amd64/check.time
-            time_distribution = f + '.distribution'
+            time_distribution = f + ".distribution"
 
             if os.path.isfile(time_distribution):
                 os.unlink(time_distribution)
 
-            checktime = open(f, 'r')
-            distribution = open(time_distribution, 'w')
+            checktime = open(f, "r")
+            distribution = open(time_distribution, "w")
 
             sys.stdout.write("checktime: %s\n" % f)
 
@@ -57,17 +66,17 @@ def main():
             num_tests = 0
             for line in all_lines:
                 line = line.strip()
-                m = re.match(r"^(?P<GROUP>\w+)/"
-                              "(?P<NUMBER>\d+)\s+"
-                              "(?P<TIME>\d+)$", line)
+                m = re.match(
+                    r"^(?P<GROUP>\w+)/" "(?P<NUMBER>\d+)\s+" "(?P<TIME>\d+)$", line
+                )
                 if not m:
                     continue
                 testline = m.groupdict()
                 num_tests += 1
-                if int(testline['TIME']) in results:
-                    results[int(testline['TIME'])] += 1
+                if int(testline["TIME"]) in results:
+                    results[int(testline["TIME"])] += 1
                 else:
-                    results[int(testline['TIME'])] = 1
+                    results[int(testline["TIME"])] = 1
             od = collections.OrderedDict(sorted(results.items()))
 
             v_total = 0
@@ -76,8 +85,11 @@ def main():
                 v_total += v
 
             if num_tests != v_total:
-                sys.stdout.write("Unexpected error, total tests: %d but computed sum test: %d\n" % (num_tests, v_total))
+                sys.stdout.write(
+                    "Unexpected error, total tests: %d but computed sum test: %d\n"
+                    % (num_tests, v_total)
+                )
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/gen_results_summary.py b/playbooks/python/workflows/fstests/gen_results_summary.py
index 28dc064c6419..c0d0702b7e6a 100644
--- a/playbooks/python/workflows/fstests/gen_results_summary.py
+++ b/playbooks/python/workflows/fstests/gen_results_summary.py
@@ -20,22 +20,25 @@ import time
 from datetime import datetime
 from junitparser import JUnitXml, Property, Properties, Failure, Error, Skipped
 
+
 def get_results(dirroot, results_file):
     """Return a list of files named results_file in a directory hierarchy"""
     for dirpath, _dirs, filenames in os.walk(dirroot):
         if results_file in filenames:
-            yield dirpath + '/' + results_file
+            yield dirpath + "/" + results_file
+
 
 def parse_timestamp(timestamp):
     """Parse an ISO-8601-like timestamp as found in an xUnit file."""
     if timestamp == "":
         return 0
-    for fmt in ('%Y-%m-%dT%H:%M:%S%z', '%Y-%m-%dT%H:%M:%S'):
+    for fmt in ("%Y-%m-%dT%H:%M:%S%z", "%Y-%m-%dT%H:%M:%S"):
         try:
             return time.mktime(datetime.strptime(timestamp, fmt).timetuple())
         except ValueError:
             pass
-    raise ValueError('no valid timestamp format found')
+    raise ValueError("no valid timestamp format found")
+
 
 def failed_tests(testsuite):
     """This iterator the failed tests from the testsuite."""
@@ -43,6 +46,7 @@ def failed_tests(testsuite):
         if isinstance(testcase.result, Failure):
             yield testcase
 
+
 def get_property(props, key):
     """Return the value of the first property with the given name"""
     if props is None:
@@ -52,6 +56,7 @@ def get_property(props, key):
             return prop.value
     return None
 
+
 def get_properties(props, key):
     """An interator which returns values of properties with a given name."""
     if props is None:
@@ -60,6 +65,7 @@ def get_properties(props, key):
         if prop.name == key:
             yield prop.value
 
+
 def remove_properties(props, key):
     """Remove properties with a given name."""
     if props is None:
@@ -68,6 +74,7 @@ def remove_properties(props, key):
         if prop.name == key:
             props.remove(prop)
 
+
 def print_tests(out_f, testsuite, result_type, type_label):
     """Print all of the tests which match a particular result_type"""
     found = False
@@ -81,17 +88,18 @@ def print_tests(out_f, testsuite, result_type, type_label):
         if result is None:
             continue
         if not found:
-            out_f.write('  %s: ' % type_label)
+            out_f.write("  %s: " % type_label)
             pos = len(type_label) + 4
             found = True
         name_len = len(testcase.name) + 1
         pos += name_len + 1
         if pos > 76:
-            out_f.write('\n    ')
+            out_f.write("\n    ")
             pos = name_len + 5
-        out_f.write(testcase.name + ' ')
+        out_f.write(testcase.name + " ")
     if found:
-        out_f.write('\n')
+        out_f.write("\n")
+
 
 def total_tests(testsuites):
     """Print the total number of tests in an array of testsuites"""
@@ -101,6 +109,7 @@ def total_tests(testsuites):
             total += testsuite.tests
     return total
 
+
 def sum_testsuites(testsuites):
     """Summarize all of the test suite statistics"""
     runtime = 0
@@ -116,6 +125,7 @@ def sum_testsuites(testsuites):
         errors += testsuite.errors
     return (tests, skipped, failures, errors, runtime)
 
+
 def print_summary(out_f, testsuite, verbose, print_section):
     """Print a summary for a particular test suite
 
@@ -126,9 +136,9 @@ def print_summary(out_f, testsuite, verbose, print_section):
     ext4/bigalloc 244 tests, 25 skipped, 5 errors, 880 seconds
        generic/219 generic/235 generic/422 generic/451 generic/456
     """
-    cfg = get_property(testsuite.properties(), 'TESTCFG')
+    cfg = get_property(testsuite.properties(), "TESTCFG")
     if cfg is None:
-        cfg = get_property(testsuite.properties(), 'FSTESTCFG')
+        cfg = get_property(testsuite.properties(), "FSTESTCFG")
 
     runtime = testsuite.time
     tests = testsuite.tests
@@ -139,70 +149,74 @@ def print_summary(out_f, testsuite, verbose, print_section):
         for test_case in testsuite:
             classname = test_case.classname
             class_list = classname.split(".")
-            section = class_list[len(class_list)-1]
+            section = class_list[len(class_list) - 1]
             break
-        out_f.write('%s: %d tests, ' % (section, tests))
+        out_f.write("%s: %d tests, " % (section, tests))
     else:
-        out_f.write('%s: %d tests, ' % (cfg, tests))
+        out_f.write("%s: %d tests, " % (cfg, tests))
     if failures > 0:
-        out_f.write('%d failures, ' % failures)
+        out_f.write("%d failures, " % failures)
     if errors > 0:
-        out_f.write('%d errors, ' % errors)
+        out_f.write("%d errors, " % errors)
     if skipped > 0:
-        out_f.write('%d skipped, ' % skipped)
+        out_f.write("%d skipped, " % skipped)
     if runtime is None:
         runtime = 0
-    out_f.write('%d seconds\n' % runtime)
+    out_f.write("%d seconds\n" % runtime)
     if verbose:
         for test_case in testsuite:
-            status = 'Pass'
+            status = "Pass"
             for result in test_case.result:
                 if isinstance(result, Failure):
-                    status = 'Failed'
+                    status = "Failed"
                 if isinstance(result, Skipped):
-                    status = 'Skipped'
+                    status = "Skipped"
                 if isinstance(result, Error):
-                    status = 'Error'
-            out_f.write("  %-12s %-8s %ds\n" %
-                        (test_case.name, status, test_case.time))
+                    status = "Error"
+            out_f.write("  %-12s %-8s %ds\n" % (test_case.name, status, test_case.time))
     else:
         if failures > 0:
-            print_tests(out_f, testsuite, Failure, 'Failures')
+            print_tests(out_f, testsuite, Failure, "Failures")
         if errors > 0:
-            print_tests(out_f, testsuite, Error, 'Errors')
+            print_tests(out_f, testsuite, Error, "Errors")
+
 
 def print_property_line(out_f, props, key):
     """Print a line containing the given property."""
     value = get_property(props, key)
     if value is not None and value != "":
-        out_f.write('%-10s %s\n' % (key + ':', value))
+        out_f.write("%-10s %s\n" % (key + ":", value))
+
 
 def print_properties(out_f, props, key):
     """Print multiple property lines."""
     for value in get_properties(props, key):
-        out_f.write('%-10s %s\n' % (key + ':', value))
+        out_f.write("%-10s %s\n" % (key + ":", value))
+
 
 def print_header(out_f, props):
     """Print the header of the report."""
-    print_property_line(out_f, props, 'TESTRUNID')
-    print_property_line(out_f, props, 'KERNEL')
-    print_property_line(out_f, props, 'CMDLINE')
-    print_property_line(out_f, props, 'CPUS')
-    print_property_line(out_f, props, 'MEM')
-    print_property_line(out_f, props, 'MNTOPTS')
-    out_f.write('\n')
+    print_property_line(out_f, props, "TESTRUNID")
+    print_property_line(out_f, props, "KERNEL")
+    print_property_line(out_f, props, "CMDLINE")
+    print_property_line(out_f, props, "CPUS")
+    print_property_line(out_f, props, "MEM")
+    print_property_line(out_f, props, "MNTOPTS")
+    out_f.write("\n")
+
 
 def print_trailer(out_f, props):
     """Print the trailer of the report."""
-    out_f.write('\n')
-    print_property_line(out_f, props, 'FSTESTIMG')
-    print_property_line(out_f, props, 'FSTESTPRJ')
-    print_properties(out_f, props, 'FSTESTVER')
-    print_property_line(out_f, props, 'FSTESTCFG')
-    print_property_line(out_f, props, 'FSTESTSET')
-    print_property_line(out_f, props, 'FSTESTEXC')
-    print_property_line(out_f, props, 'FSTESTOPT')
-    print_property_line(out_f, props, 'GCE ID')
+    out_f.write("\n")
+    print_property_line(out_f, props, "FSTESTIMG")
+    print_property_line(out_f, props, "FSTESTPRJ")
+    print_properties(out_f, props, "FSTESTVER")
+    print_property_line(out_f, props, "FSTESTCFG")
+    print_property_line(out_f, props, "FSTESTSET")
+    print_property_line(out_f, props, "FSTESTEXC")
+    print_property_line(out_f, props, "FSTESTOPT")
+    print_property_line(out_f, props, "GCE ID")
+
 
 def check_for_ltm(results_dir, props):
     """Check to see if the results directory was created by the LTM and
@@ -210,15 +224,15 @@ def check_for_ltm(results_dir, props):
     mode.
     """
     try:
-        out_f = open(os.path.join(results_dir, 'ltm-run-stats'))
+        out_f = open(os.path.join(results_dir, "ltm-run-stats"))
         for line in out_f:
-            key, value = line.split(': ', 1)
-            value = value.rstrip('\n').strip('"')
+            key, value = line.split(": ", 1)
+            value = value.rstrip("\n").strip('"')
             remove_properties(props, key)
             props.add_property(Property(key, value))
         out_f.close()
-        remove_properties(props, 'GCE ID')
-        remove_properties(props, 'FSTESTCFG')
+        remove_properties(props, "GCE ID")
+        remove_properties(props, "FSTESTCFG")
         return True
     except IOError:
         try:
@@ -227,9 +241,15 @@ def check_for_ltm(results_dir, props):
             pass
         return False
 
-def gen_results_summary(results_dir, output_fn=None, merge_fn=None,
-                        verbose=False, print_section=False,
-                        results_file='results.xml'):
+
+def gen_results_summary(
+    results_dir,
+    output_fn=None,
+    merge_fn=None,
+    verbose=False,
+    print_section=False,
+    results_file="results.xml",
+):
     """Scan a results directory and generate a summary file"""
     reports = []
     combined = JUnitXml()
@@ -263,16 +283,18 @@ def gen_results_summary(results_dir, output_fn=None, merge_fn=None,
         combined.add_testsuite(testsuite)
         nr_files += 1
 
-    out_f.write('Totals: %d tests, %d skipped, %d failures, %d errors, %ds\n' \
-                % sum_testsuites(reports))
+    out_f.write(
+        "Totals: %d tests, %d skipped, %d failures, %d errors, %ds\n"
+        % sum_testsuites(reports)
+    )
 
     print_trailer(out_f, props)
 
     if merge_fn is not None:
         combined.update_statistics()
-        combined.write(merge_fn + '.new')
+        combined.write(merge_fn + ".new")
         if os.path.exists(merge_fn):
-            os.rename(merge_fn, merge_fn + '.bak')
-        os.rename(merge_fn + '.new', merge_fn)
+            os.rename(merge_fn, merge_fn + ".bak")
+        os.rename(merge_fn + ".new", merge_fn)
 
     return nr_files
diff --git a/playbooks/python/workflows/fstests/get_new_expunge_files.py b/playbooks/python/workflows/fstests/get_new_expunge_files.py
index a01330a2d32a..5b164044e1da 100755
--- a/playbooks/python/workflows/fstests/get_new_expunge_files.py
+++ b/playbooks/python/workflows/fstests/get_new_expunge_files.py
@@ -14,12 +14,23 @@ import sys
 import subprocess
 from lib import git
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('expunge_dir', metavar='<directory with expunge files>', type=str,
-                        help='directory with expunge files')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "expunge_dir",
+        metavar="<directory with expunge files>",
+        type=str,
+        help="directory with expunge files",
+    )
     args = parser.parse_args()
 
     fs_expunge_dir = args.expunge_dir
@@ -41,5 +52,6 @@ def main():
                     short_file = f.split("../")[1]
                 sys.stdout.write("%s\n" % (short_file))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/lib/git.py b/playbooks/python/workflows/fstests/lib/git.py
index 1884f565d613..fe8cf857da7e 100644
--- a/playbooks/python/workflows/fstests/lib/git.py
+++ b/playbooks/python/workflows/fstests/lib/git.py
@@ -2,25 +2,36 @@
 
 import subprocess, os
 
+
 class GitError(Exception):
     pass
+
+
 class ExecutionError(GitError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(GitError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def _check(process):
     if process.returncode != 0:
         raise ExecutionError(process.returncode)
 
+
 def is_new_file(file):
-    cmd = ['git', 'status', '-s', file ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["git", "status", "-s", file]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=1)
@@ -31,6 +42,6 @@ def is_new_file(file):
         process.wait()
         if process.returncode != 0:
             return False
-        if stdout.startswith('??'):
+        if stdout.startswith("??"):
             return True
         return False
diff --git a/playbooks/python/workflows/fstests/xunit_merge_all.py b/playbooks/python/workflows/fstests/xunit_merge_all.py
index 6af6e9817ab2..d2d2b64c100a 100755
--- a/playbooks/python/workflows/fstests/xunit_merge_all.py
+++ b/playbooks/python/workflows/fstests/xunit_merge_all.py
@@ -11,6 +11,7 @@ import os
 import sys
 from junitparser import JUnitXml, TestSuite
 
+
 def get_test_suite(filename):
     try:
         ts = JUnitXml.fromfile(filename)
@@ -18,21 +19,31 @@ def get_test_suite(filename):
         sys.exit("Couldn't open %s: %s" % (filename, e[1]))
 
     if type(ts) != TestSuite:
-        sys.exit('%s is not a xUnit report file' % filename)
+        sys.exit("%s is not a xUnit report file" % filename)
     return ts
 
+
 def merge_ts(old_ts, new_ts):
     for tc in new_ts:
         old_ts.add_testcase(tc)
     old_ts.update_statistics()
     return old_ts
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Merge all xunit files into one')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputfile', metavar='<output file>', type=str,
-                        help='The file to generate output to')
+    parser = argparse.ArgumentParser(description="Merge all xunit files into one")
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputfile",
+        metavar="<output file>",
+        type=str,
+        help="The file to generate output to",
+    )
     args = parser.parse_args()
 
     all_xunit_ts = None
@@ -46,7 +57,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.xml'):
+            if not f.endswith(".xml"):
                 continue
 
             sys.stdout.write("Processing %s ...\n" % f)
@@ -60,7 +71,11 @@ def main():
 
     if all_xunit_ts:
         all_xunit_ts.write(args.outputfile)
-        sys.stdout.write("%s generated by merging all the above %d xunit files successfully\n" % (args.outputfile, num))
+        sys.stdout.write(
+            "%s generated by merging all the above %d xunit files successfully\n"
+            % (args.outputfile, num)
+        )
+
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-compare.py b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
index 8faecdcffa87..dafeb119d857 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
@@ -8,18 +8,20 @@ import re
 import argparse
 from concurrent.futures import ThreadPoolExecutor
 
+
 # Function to parse a line and extract time and TPS
 def parse_line(line):
-    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    match = re.search(r"\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)", line)
     if match:
         time_in_seconds = int(match.group(1))
         tps = float(match.group(2))
         return time_in_seconds, tps
     return None
 
+
 # Function to read and parse sysbench output file
 def read_sysbench_output(file_path):
-    with open(file_path, 'r') as file:
+    with open(file_path, "r") as file:
         lines = file.readlines()
 
     with ThreadPoolExecutor() as executor:
@@ -27,23 +29,58 @@ def read_sysbench_output(file_path):
 
     return [result for result in results if result is not None]
 
+
 # Function to list available matplotlib themes
 def list_themes():
     print("Available matplotlib themes:")
     for style in plt.style.available:
         print(style)
 
+
 # Main function
 def main():
-    parser = argparse.ArgumentParser(description='Compare sysbench outputs.')
-    parser.add_argument('file1', type=str, nargs='?', default='sysbench_output_doublewrite.txt', help='First sysbench output file')
-    parser.add_argument('file2', type=str, nargs='?', default='sysbench_output_nodoublewrite.txt', help='Second sysbench output file')
-    parser.add_argument('--legend1', type=str, default='innodb_doublewrite=ON', help='Legend for the first file')
-    parser.add_argument('--legend2', type=str, default='innodb_doublewrite=OFF', help='Legend for the second file')
-    parser.add_argument('--theme', type=str, default='dark_background', help='Matplotlib theme to use')
-    parser.add_argument('--list-themes', action='store_true', help='List available matplotlib themes')
-    parser.add_argument('--output', type=str, default='a_vs_b.png', help='Path of file to save')
-    parser.add_argument('--report-interval', type=int, default=1, help='Time interval in seconds for reporting')
+    parser = argparse.ArgumentParser(description="Compare sysbench outputs.")
+    parser.add_argument(
+        "file1",
+        type=str,
+        nargs="?",
+        default="sysbench_output_doublewrite.txt",
+        help="First sysbench output file",
+    )
+    parser.add_argument(
+        "file2",
+        type=str,
+        nargs="?",
+        default="sysbench_output_nodoublewrite.txt",
+        help="Second sysbench output file",
+    )
+    parser.add_argument(
+        "--legend1",
+        type=str,
+        default="innodb_doublewrite=ON",
+        help="Legend for the first file",
+    )
+    parser.add_argument(
+        "--legend2",
+        type=str,
+        default="innodb_doublewrite=OFF",
+        help="Legend for the second file",
+    )
+    parser.add_argument(
+        "--theme", type=str, default="dark_background", help="Matplotlib theme to use"
+    )
+    parser.add_argument(
+        "--list-themes", action="store_true", help="List available matplotlib themes"
+    )
+    parser.add_argument(
+        "--output", type=str, default="a_vs_b.png", help="Path of file to save"
+    )
+    parser.add_argument(
+        "--report-interval",
+        type=int,
+        default=1,
+        help="Time interval in seconds for reporting",
+    )
 
     args = parser.parse_args()
 
@@ -62,37 +99,40 @@ def main():
     tps_data_2 = [(time * args.report_interval, tps) for time, tps in tps_data_2]
 
     # Determine the maximum time value to decide if we need to use hours or seconds
-    max_time_in_seconds = max(max(tps_data_1, key=lambda x: x[0])[0], max(tps_data_2, key=lambda x: x[0])[0])
+    max_time_in_seconds = max(
+        max(tps_data_1, key=lambda x: x[0])[0], max(tps_data_2, key=lambda x: x[0])[0]
+    )
     use_hours = max_time_in_seconds > 2 * 3600
 
     # Convert times if necessary
     if use_hours:
         tps_data_1 = [(time / 3600, tps) for time, tps in tps_data_1]
         tps_data_2 = [(time / 3600, tps) for time, tps in tps_data_2]
-        time_label = 'Time (hours)'
+        time_label = "Time (hours)"
     else:
-        time_label = 'Time (seconds)'
+        time_label = "Time (seconds)"
 
     # Create pandas DataFrames
-    df1 = pd.DataFrame(tps_data_1, columns=[time_label, 'TPS'])
-    df2 = pd.DataFrame(tps_data_2, columns=[time_label, 'TPS'])
+    df1 = pd.DataFrame(tps_data_1, columns=[time_label, "TPS"])
+    df2 = pd.DataFrame(tps_data_2, columns=[time_label, "TPS"])
 
     # Plot the TPS values
     plt.figure(figsize=(30, 12))
 
-    plt.plot(df1[time_label], df1['TPS'], 'ro', markersize=2, label=args.legend1)
-    plt.plot(df2[time_label], df2['TPS'], 'go', markersize=2, label=args.legend2)
+    plt.plot(df1[time_label], df1["TPS"], "ro", markersize=2, label=args.legend1)
+    plt.plot(df2[time_label], df2["TPS"], "go", markersize=2, label=args.legend2)
 
-    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.title("Transactions Per Second (TPS) Over Time")
     plt.xlabel(time_label)
-    plt.ylabel('TPS')
+    plt.ylabel("TPS")
     plt.grid(True)
     # Try plotting without this to zoom in
     plt.ylim(0)
     plt.legend()
     plt.tight_layout()
     plt.savefig(args.output)
-    #plt.show()
+    # plt.show()
+
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-plot.py b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
index 5505ced88ce2..ee56b1f56e3e 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
@@ -9,27 +9,38 @@ import re
 import argparse
 from concurrent.futures import ThreadPoolExecutor
 
+
 # Function to parse a line and extract time and TPS from text
 def parse_line(line):
-    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    match = re.search(r"\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)", line)
     if match:
         time_in_seconds = int(match.group(1))
         tps = float(match.group(2))
         return time_in_seconds, tps
     return None
 
+
 def main():
     # Setup argument parser
-    parser = argparse.ArgumentParser(description="Generate TPS plot from sysbench output")
-    parser.add_argument('input_file', type=str, help="Path to the input file (text or JSON)")
-    parser.add_argument('--output', type=str, default='tps_over_time.png', help="Output image file (default: tps_over_time.png)")
+    parser = argparse.ArgumentParser(
+        description="Generate TPS plot from sysbench output"
+    )
+    parser.add_argument(
+        "input_file", type=str, help="Path to the input file (text or JSON)"
+    )
+    parser.add_argument(
+        "--output",
+        type=str,
+        default="tps_over_time.png",
+        help="Output image file (default: tps_over_time.png)",
+    )
 
     # Parse arguments
     args = parser.parse_args()
 
     # Read the input file
     try:
-        with open(args.input_file, 'r') as file:
+        with open(args.input_file, "r") as file:
             # Read text lines and parse them concurrently
             lines = file.readlines()
             with ThreadPoolExecutor() as executor:
@@ -51,19 +62,19 @@ def main():
     # Convert times if necessary
     if use_hours:
         tps_data = [(time / 3600, tps) for time, tps in tps_data]
-        time_label = 'Time (hours)'
+        time_label = "Time (hours)"
     else:
-        time_label = 'Time (seconds)'
+        time_label = "Time (seconds)"
 
     # Create a pandas DataFrame
-    df = pd.DataFrame(tps_data, columns=[time_label, 'TPS'])
+    df = pd.DataFrame(tps_data, columns=[time_label, "TPS"])
 
     # Plot the TPS values
     plt.figure(figsize=(30, 12))
-    plt.plot(df[time_label], df['TPS'], 'o', markersize=2)
-    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.plot(df[time_label], df["TPS"], "o", markersize=2)
+    plt.title("Transactions Per Second (TPS) Over Time")
     plt.xlabel(time_label)
-    plt.ylabel('TPS')
+    plt.ylabel("TPS")
     plt.grid(True)
     plt.ylim(0)
     plt.tight_layout()
@@ -72,5 +83,6 @@ def main():
     plt.savefig(args.output)
     print(f"TPS plot saved to {args.output}")
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-variance.py b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
index 969714589b46..aee083b9e3cf 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
@@ -9,15 +9,17 @@ import seaborn as sns
 import argparse
 from scipy.stats import norm
 
+
 def extract_tps(filename):
     tps_values = []
-    with open(filename, 'r') as file:
+    with open(filename, "r") as file:
         for line in file:
-            match = re.search(r'tps: (\d+\.\d+)', line)
+            match = re.search(r"tps: (\d+\.\d+)", line)
             if match:
                 tps_values.append(float(match.group(1)))
     return tps_values
 
+
 def analyze_tps(tps_values):
     mean_tps = np.mean(tps_values)
     median_tps = np.median(tps_values)
@@ -25,28 +27,49 @@ def analyze_tps(tps_values):
     variance_tps = np.var(tps_values)
     return mean_tps, median_tps, std_tps, variance_tps
 
+
 def print_statistics(label, tps_values):
     mean_tps, median_tps, std_tps, variance_tps = analyze_tps(tps_values)
-    print(f'{label} Statistics:')
-    print(f'Mean TPS: {mean_tps:.2f}')
-    print(f'Median TPS: {median_tps:.2f}')
-    print(f'Standard Deviation of TPS: {std_tps:.2f}')
-    print(f'Variance of TPS: {variance_tps:.2f}\n')
+    print(f"{label} Statistics:")
+    print(f"Mean TPS: {mean_tps:.2f}")
+    print(f"Median TPS: {median_tps:.2f}")
+    print(f"Standard Deviation of TPS: {std_tps:.2f}")
+    print(f"Variance of TPS: {variance_tps:.2f}\n")
+
 
 def plot_histograms(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     plt.figure(figsize=(20, 12))
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    plt.hist(tps_values1, bins=bins, alpha=0.5, color=color1, edgecolor='black', label=legend1)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    plt.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.5,
+        color=color1,
+        edgecolor="black",
+        label=legend1,
+    )
     if tps_values2:
-        plt.hist(tps_values2, bins=bins, alpha=0.5, color=color2, edgecolor='black', label=legend2)
-    plt.title('Distribution of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Frequency')
-    plt.legend(loc='best')
+        plt.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.5,
+            color=color2,
+            edgecolor="black",
+            label=legend2,
+        )
+    plt.title("Distribution of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Frequency")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'histogram.png')
+    plt.savefig(outdir + "histogram.png")
     plt.show()
 
+
 def plot_box_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     data = []
     labels = []
@@ -58,112 +81,180 @@ def plot_box_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, o
     plt.figure(figsize=(20, 12))
     box = plt.boxplot(data, labels=labels, patch_artist=True)
     colors = [color1, color2]
-    for patch, color in zip(box['boxes'], colors):
+    for patch, color in zip(box["boxes"], colors):
         patch.set_facecolor(color)
-    plt.title('Box Plot of TPS Values')
-    plt.ylabel('Transactions Per Second (TPS)')
+    plt.title("Box Plot of TPS Values")
+    plt.ylabel("Transactions Per Second (TPS)")
     plt.grid(True)
-    plt.savefig(outdir + 'box_plot.png')
+    plt.savefig(outdir + "box_plot.png")
     plt.show()
 
-def plot_density_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_density_plots(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     plt.figure(figsize=(20, 12))
     sns.kdeplot(tps_values1, fill=True, label=legend1, color=color1)
     if tps_values2:
         sns.kdeplot(tps_values2, fill=True, label=legend2, color=color2)
-    plt.title('Density Plot of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Density')
-    plt.legend(loc='best')
+    plt.title("Density Plot of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Density")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'density_plot.png')
+    plt.savefig(outdir + "density_plot.png")
     plt.show()
 
-def plot_combined_hist_density(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_combined_hist_density(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     plt.figure(figsize=(20, 12))
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    plt.hist(tps_values1, bins=bins, alpha=0.3, color=color1, edgecolor='black', label=f'Histogram {legend1}', density=True)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    plt.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.3,
+        color=color1,
+        edgecolor="black",
+        label=f"Histogram {legend1}",
+        density=True,
+    )
     if tps_values2:
-        plt.hist(tps_values2, bins=bins, alpha=0.3, color=color2, edgecolor='black', label=f'Histogram {legend2}', density=True)
-    sns.kdeplot(tps_values1, fill=False, label=f'Density {legend1}', color=color1)
+        plt.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.3,
+            color=color2,
+            edgecolor="black",
+            label=f"Histogram {legend2}",
+            density=True,
+        )
+    sns.kdeplot(tps_values1, fill=False, label=f"Density {legend1}", color=color1)
     if tps_values2:
-        sns.kdeplot(tps_values2, fill=False, label=f'Density {legend2}', color=color2)
+        sns.kdeplot(tps_values2, fill=False, label=f"Density {legend2}", color=color2)
 
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
     ax2 = plt.gca().twinx()
-    ax2.set_ylabel('Density')
-    ax2.axvline(mean1, color=color1, linestyle='dotted', linewidth=2)
-    ax2.axvline(mean1 - std1, color=color1, linestyle='dotted', linewidth=1)
-    ax2.axvline(mean1 + std1, color=color1, linestyle='dotted', linewidth=1)
+    ax2.set_ylabel("Density")
+    ax2.axvline(mean1, color=color1, linestyle="dotted", linewidth=2)
+    ax2.axvline(mean1 - std1, color=color1, linestyle="dotted", linewidth=1)
+    ax2.axvline(mean1 + std1, color=color1, linestyle="dotted", linewidth=1)
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        ax2.axvline(mean2, color=color2, linestyle='dotted', linewidth=2)
-        ax2.axvline(mean2 - std2, color=color2, linestyle='dotted', linewidth=1)
-        ax2.axvline(mean2 + std2, color=color2, linestyle='dotted', linewidth=1)
-
-    plt.title('Combined Histogram and Density Plot of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Frequency/Density')
-    plt.legend(loc='best')
+        ax2.axvline(mean2, color=color2, linestyle="dotted", linewidth=2)
+        ax2.axvline(mean2 - std2, color=color2, linestyle="dotted", linewidth=1)
+        ax2.axvline(mean2 + std2, color=color2, linestyle="dotted", linewidth=1)
+
+    plt.title("Combined Histogram and Density Plot of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Frequency/Density")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'combined_hist_density.png')
+    plt.savefig(outdir + "combined_hist_density.png")
     plt.show()
 
+
 def plot_bell_curve(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     plt.figure(figsize=(20, 12))
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
-    x1 = np.linspace(mean1 - 3*std1, mean1 + 3*std1, 100)
-    plt.plot(x1, norm.pdf(x1, mean1, std1) * 100, label=f'Bell Curve {legend1}', color=color1)  # Multiplying by 100 for percentage
+    x1 = np.linspace(mean1 - 3 * std1, mean1 + 3 * std1, 100)
+    plt.plot(
+        x1, norm.pdf(x1, mean1, std1) * 100, label=f"Bell Curve {legend1}", color=color1
+    )  # Multiplying by 100 for percentage
 
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        x2 = np.linspace(mean2 - 3*std2, mean2 + 3*std2, 100)
-        plt.plot(x2, norm.pdf(x2, mean2, std2) * 100, label=f'Bell Curve {legend2}', color=color2)  # Multiplying by 100 for percentage
-
-    plt.title('Bell Curve (Normal Distribution) of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Probability Density (%)')
-    plt.legend(loc='best')
+        x2 = np.linspace(mean2 - 3 * std2, mean2 + 3 * std2, 100)
+        plt.plot(
+            x2,
+            norm.pdf(x2, mean2, std2) * 100,
+            label=f"Bell Curve {legend2}",
+            color=color2,
+        )  # Multiplying by 100 for percentage
+
+    plt.title("Bell Curve (Normal Distribution) of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Probability Density (%)")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'bell_curve.png')
+    plt.savefig(outdir + "bell_curve.png")
     plt.show()
 
-def plot_combined_hist_bell_curve(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_combined_hist_bell_curve(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     fig, ax1 = plt.subplots(figsize=(20, 12))
 
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    ax1.hist(tps_values1, bins=bins, alpha=0.5, color=color1, edgecolor='black', label=legend1)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    ax1.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.5,
+        color=color1,
+        edgecolor="black",
+        label=legend1,
+    )
     if tps_values2:
-        ax1.hist(tps_values2, bins=bins, alpha=0.5, color=color2, edgecolor='black', label=legend2)
-
-    ax1.set_xlabel('Transactions Per Second (TPS)')
-    ax1.set_ylabel('Frequency')
-    ax1.legend(loc='upper left')
+        ax1.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.5,
+            color=color2,
+            edgecolor="black",
+            label=legend2,
+        )
+
+    ax1.set_xlabel("Transactions Per Second (TPS)")
+    ax1.set_ylabel("Frequency")
+    ax1.legend(loc="upper left")
     ax1.grid(True)
 
     ax2 = ax1.twinx()
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
-    x1 = np.linspace(mean1 - 3*std1, mean1 + 3*std1, 100)
-    ax2.plot(x1, norm.pdf(x1, mean1, std1) * 100, label=f'Bell Curve {legend1}', color=color1, linestyle='dashed')
-    ax2.axvline(mean1, color=color1, linestyle='dotted', linewidth=2)
-    ax2.axvline(mean1 - std1, color=color1, linestyle='dotted', linewidth=1)
-    ax2.axvline(mean1 + std1, color=color1, linestyle='dotted', linewidth=1)
+    x1 = np.linspace(mean1 - 3 * std1, mean1 + 3 * std1, 100)
+    ax2.plot(
+        x1,
+        norm.pdf(x1, mean1, std1) * 100,
+        label=f"Bell Curve {legend1}",
+        color=color1,
+        linestyle="dashed",
+    )
+    ax2.axvline(mean1, color=color1, linestyle="dotted", linewidth=2)
+    ax2.axvline(mean1 - std1, color=color1, linestyle="dotted", linewidth=1)
+    ax2.axvline(mean1 + std1, color=color1, linestyle="dotted", linewidth=1)
 
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        x2 = np.linspace(mean2 - 3*std2, mean2 + 3*std2, 100)
-        ax2.plot(x2, norm.pdf(x2, mean2, std2) * 100, label=f'Bell Curve {legend2}', color=color2, linestyle='dashed')
-        ax2.axvline(mean2, color=color2, linestyle='dotted', linewidth=2)
-        ax2.axvline(mean2 - std2, color=color2, linestyle='dotted', linewidth=1)
-        ax2.axvline(mean2 + std2, color=color2, linestyle='dotted', linewidth=1)
-
-    ax2.set_ylabel('Probability Density (%)')
-    ax2.legend(loc='upper center')
-
-    plt.title('Combined Histogram and Bell Curve of TPS Values')
-    plt.savefig(outdir + 'combined_hist_bell_curve.png')
+        x2 = np.linspace(mean2 - 3 * std2, mean2 + 3 * std2, 100)
+        ax2.plot(
+            x2,
+            norm.pdf(x2, mean2, std2) * 100,
+            label=f"Bell Curve {legend2}",
+            color=color2,
+            linestyle="dashed",
+        )
+        ax2.axvline(mean2, color=color2, linestyle="dotted", linewidth=2)
+        ax2.axvline(mean2 - std2, color=color2, linestyle="dotted", linewidth=1)
+        ax2.axvline(mean2 + std2, color=color2, linestyle="dotted", linewidth=1)
+
+    ax2.set_ylabel("Probability Density (%)")
+    ax2.legend(loc="upper center")
+
+    plt.title("Combined Histogram and Bell Curve of TPS Values")
+    plt.savefig(outdir + "combined_hist_bell_curve.png")
     plt.show()
 
+
 def plot_variance_bars(variance1, variance2, legend1, legend2, color1, color2, outdir):
     fig, ax1 = plt.subplots(figsize=(20, 12))
 
@@ -173,24 +264,39 @@ def plot_variance_bars(variance1, variance2, legend1, legend2, color1, color2, o
 
     bars = plt.bar(labels, variances, color=colors)
     for bar, variance in zip(bars, variances):
-        plt.text(bar.get_x() + bar.get_width() / 2, bar.get_height(), f'{variance:.2f}', ha='center', va='bottom')
+        plt.text(
+            bar.get_x() + bar.get_width() / 2,
+            bar.get_height(),
+            f"{variance:.2f}",
+            ha="center",
+            va="bottom",
+        )
 
     # Calculate the factor by which the larger variance is greater than the smaller variance
     if variance1 != 0 and variance2 != 0:
         factor = max(variance1, variance2) / min(variance1, variance2)
-        factor_text = f'Variance Factor: {factor:.2f}'
-        plt.text(1, max(variances) * 1.05, factor_text, ha='center', va='bottom', fontsize=12, color='white')
-
-    plt.title('Variance of TPS Values')
-    plt.ylabel('Variance')
+        factor_text = f"Variance Factor: {factor:.2f}"
+        plt.text(
+            1,
+            max(variances) * 1.05,
+            factor_text,
+            ha="center",
+            va="bottom",
+            fontsize=12,
+            color="white",
+        )
+
+    plt.title("Variance of TPS Values")
+    plt.ylabel("Variance")
 
     # Add lollipop marker
     for bar, variance in zip(bars, variances):
-        plt.plot(bar.get_x() + bar.get_width() / 2, variance, 'o', color='black')
+        plt.plot(bar.get_x() + bar.get_width() / 2, variance, "o", color="black")
 
-    plt.savefig(outdir + 'variance_bar.png')
+    plt.savefig(outdir + "variance_bar.png")
     plt.show()
 
+
 def plot_outliers(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     data = [tps_values1]
     labels = [legend1]
@@ -202,41 +308,69 @@ def plot_outliers(tps_values1, tps_values2, legend1, legend2, color1, color2, ou
         colors.append(color2)
 
     fig, ax = plt.subplots(figsize=(20, 12))
-    box = ax.boxplot(data, labels=labels, patch_artist=True, showfliers=True,
-                     whiskerprops=dict(color='white', linewidth=2),
-                     capprops=dict(color='white', linewidth=2),
-                     medianprops=dict(color='yellow', linewidth=2))
+    box = ax.boxplot(
+        data,
+        labels=labels,
+        patch_artist=True,
+        showfliers=True,
+        whiskerprops=dict(color="white", linewidth=2),
+        capprops=dict(color="white", linewidth=2),
+        medianprops=dict(color="yellow", linewidth=2),
+    )
 
     # Color the boxes
-    for patch, color in zip(box['boxes'], colors):
+    for patch, color in zip(box["boxes"], colors):
         patch.set_facecolor(color)
 
     # Scatter plot for the actual points
     for i, (d, color) in enumerate(zip(data, colors)):
         y = d
         # Adding jitter to the x-axis for better visibility
-        x = np.random.normal(i + 1, 0.04, size=len(y))  # Adding some jitter to the x-axis
-        ax.scatter(x, y, alpha=0.6, color=color, edgecolor='black')
+        x = np.random.normal(
+            i + 1, 0.04, size=len(y)
+        )  # Adding some jitter to the x-axis
+        ax.scatter(x, y, alpha=0.6, color=color, edgecolor="black")
 
-    plt.title('Outliers in TPS Values')
-    plt.ylabel('Transactions Per Second (TPS)')
+    plt.title("Outliers in TPS Values")
+    plt.ylabel("Transactions Per Second (TPS)")
     plt.grid(True)
-    plt.savefig(outdir + 'outliers_plot.png')
+    plt.savefig(outdir + "outliers_plot.png")
     plt.show()
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Analyze and compare TPS values from sysbench output files.')
-    parser.add_argument('file1', help='First TPS file')
-    parser.add_argument('--legend1', type=str, default='innodb_doublewrite=ON', help='Legend for the first file')
-    parser.add_argument('file2', nargs='?', default=None, help='Second TPS file (optional)')
-    parser.add_argument('--legend2', type=str, default='innodb_doublewrite=OFF', help='Legend for the second file')
-    parser.add_argument('--dir', type=str, default='./', help='Path to place images')
-    parser.add_argument('--color1', default='cyan', help='Color for the first dataset (default: cyan)')
-    parser.add_argument('--color2', default='orange', help='Color for the second dataset (default: orange)')
+    parser = argparse.ArgumentParser(
+        description="Analyze and compare TPS values from sysbench output files."
+    )
+    parser.add_argument("file1", help="First TPS file")
+    parser.add_argument(
+        "--legend1",
+        type=str,
+        default="innodb_doublewrite=ON",
+        help="Legend for the first file",
+    )
+    parser.add_argument(
+        "file2", nargs="?", default=None, help="Second TPS file (optional)"
+    )
+    parser.add_argument(
+        "--legend2",
+        type=str,
+        default="innodb_doublewrite=OFF",
+        help="Legend for the second file",
+    )
+    parser.add_argument("--dir", type=str, default="./", help="Path to place images")
+    parser.add_argument(
+        "--color1", default="cyan", help="Color for the first dataset (default: cyan)"
+    )
+    parser.add_argument(
+        "--color2",
+        default="orange",
+        help="Color for the second dataset (default: orange)",
+    )
 
     args = parser.parse_args()
 
-    plt.style.use('dark_background')  # Set the dark theme
+    plt.style.use("dark_background")  # Set the dark theme
 
     tps_values1 = extract_tps(args.file1)
     tps_values2 = extract_tps(args.file2) if args.file2 else None
@@ -245,22 +379,89 @@ def main():
     if tps_values2:
         print_statistics(args.legend2, tps_values2)
 
-    plot_histograms(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_box_plots(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_density_plots(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_combined_hist_density(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_bell_curve(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_combined_hist_bell_curve(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
+    plot_histograms(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_box_plots(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_density_plots(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_combined_hist_density(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_bell_curve(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_combined_hist_bell_curve(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
 
     # Plot variance bars
     _, _, _, variance1 = analyze_tps(tps_values1)
     if tps_values2:
         _, _, _, variance2 = analyze_tps(tps_values2)
-        plot_variance_bars(variance1, variance2, args.legend1, args.legend2, args.color1, args.color2, args.dir)
+        plot_variance_bars(
+            variance1,
+            variance2,
+            args.legend1,
+            args.legend2,
+            args.color1,
+            args.color2,
+            args.dir,
+        )
     else:
-        plot_variance_bars(variance1, 0, args.legend1, '', args.color1, 'black', args.dir)  # Use black for the second bar if there's only one dataset
-
-    plot_outliers(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-
-if __name__ == '__main__':
+        plot_variance_bars(
+            variance1, 0, args.legend1, "", args.color1, "black", args.dir
+        )  # Use black for the second bar if there's only one dataset
+
+    plot_outliers(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py b/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
index a5abdf0f7045..076fe5833416 100755
--- a/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
+++ b/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
@@ -24,11 +24,12 @@ pcie_hotplug_template = """<!-- PCIE passthrough device -->
 <!-- End of PCIE passthrough device -->
 """
 
+
 def main():
-    topdir = os.environ.get('TOPDIR', '.')
+    topdir = os.environ.get("TOPDIR", ".")
 
     # load extra_vars
-    with open(f'{topdir}/extra_vars.yaml') as stream:
+    with open(f"{topdir}/extra_vars.yaml") as stream:
         extra_vars = yaml.safe_load(stream)
 
     yaml_nodes_file = f'{topdir}/{extra_vars["kdevops_nodes"]}'
@@ -38,36 +39,48 @@ def main():
         nodes = yaml.safe_load(stream)
 
     # add pcie devices
-    for node in nodes['guestfs_nodes']:
-        name = node['name']
-        pcipassthrough = node.get('pcipassthrough')
+    for node in nodes["guestfs_nodes"]:
+        name = node["name"]
+        pcipassthrough = node.get("pcipassthrough")
         if not pcipassthrough:
             continue
         for dev_key_name in pcipassthrough:
             dev = pcipassthrough.get(dev_key_name)
             dev_keys = list(dev.keys())
-            if 'domain' not in dev_keys or 'bus' not in dev_keys or 'slot' not in dev_keys or 'function' not in dev_keys:
-                raise Exception(f"Missing pcie attributes for device %s in %s" %
-                                (dev_key_name, yaml_nodes_file))
-            domain = hex(dev.get('domain'))
-            bus = hex(dev.get('bus'))
-            slot = hex(dev.get('slot'))
-            function = hex(dev.get('function'))
+            if (
+                "domain" not in dev_keys
+                or "bus" not in dev_keys
+                or "slot" not in dev_keys
+                or "function" not in dev_keys
+            ):
+                raise Exception(
+                    f"Missing pcie attributes for device %s in %s"
+                    % (dev_key_name, yaml_nodes_file)
+                )
+            domain = hex(dev.get("domain"))
+            bus = hex(dev.get("bus"))
+            slot = hex(dev.get("slot"))
+            function = hex(dev.get("function"))
 
-            pcie_xml = f"{extra_vars['guestfs_path']}/{name}/pcie_passthrough_" + dev_key_name + ".xml"
+            pcie_xml = (
+                f"{extra_vars['guestfs_path']}/{name}/pcie_passthrough_"
+                + dev_key_name
+                + ".xml"
+            )
 
             if os.path.exists(pcie_xml):
                 os.remove(pcie_xml)
 
-            device_xml = open(pcie_xml, 'w')
+            device_xml = open(pcie_xml, "w")
             context = {
-                "domain" : domain,
-                "bus" : bus,
-                "slot" : slot,
-                "function" : function,
+                "domain": domain,
+                "bus": bus,
+                "slot": slot,
+                "function": function,
             }
             device_xml.write(pcie_hotplug_template.format(**context))
             device_xml.close()
 
+
 if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/linux-mirror/python/gen-mirror-files.py b/playbooks/roles/linux-mirror/python/gen-mirror-files.py
index 65fc909dd608..c43b34a80cdc 100755
--- a/playbooks/roles/linux-mirror/python/gen-mirror-files.py
+++ b/playbooks/roles/linux-mirror/python/gen-mirror-files.py
@@ -13,9 +13,9 @@ import time
 import os
 from pathlib import Path
 
-topdir = os.environ.get('TOPDIR', '.')
+topdir = os.environ.get("TOPDIR", ".")
 yaml_dir = topdir + "/playbooks/roles/linux-mirror/linux-mirror-systemd/"
-default_mirrors_yaml = yaml_dir + 'mirrors.yaml'
+default_mirrors_yaml = yaml_dir + "mirrors.yaml"
 
 service_template = """[Unit]
 Description={short_name} mirror [{target}]
@@ -44,19 +44,37 @@ OnUnitInactiveSec={refresh}
 WantedBy=default.target
 """
 
+
 def main():
-    parser = argparse.ArgumentParser(description='gen-mirror-files')
-    parser.add_argument('--yaml-mirror', metavar='<yaml_mirror>', type=str,
-                        default=default_mirrors_yaml,
-                        help='The yaml mirror input file.')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
-    parser.add_argument('--refresh', metavar='<refresh>', type=str,
-                        default='360m',
-                        help='How often to update the git tree.')
-    parser.add_argument('--refresh-on-boot', metavar='<refresh>', type=str,
-                        default='10m',
-                        help='How long to wait on boot to update the git tree.')
+    parser = argparse.ArgumentParser(description="gen-mirror-files")
+    parser.add_argument(
+        "--yaml-mirror",
+        metavar="<yaml_mirror>",
+        type=str,
+        default=default_mirrors_yaml,
+        help="The yaml mirror input file.",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
+    parser.add_argument(
+        "--refresh",
+        metavar="<refresh>",
+        type=str,
+        default="360m",
+        help="How often to update the git tree.",
+    )
+    parser.add_argument(
+        "--refresh-on-boot",
+        metavar="<refresh>",
+        type=str,
+        default="10m",
+        help="How long to wait on boot to update the git tree.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.yaml_mirror):
@@ -64,86 +82,95 @@ def main():
         sys.exit(1)
 
     # load the yaml input file
-    with open(f'{args.yaml_mirror}') as stream:
+    with open(f"{args.yaml_mirror}") as stream:
         yaml_vars = yaml.safe_load(stream)
 
-    if yaml_vars.get('mirrors') is None:
-        raise Exception(f"Missing mirrors descriptions on %s" %
-                        (args.yaml_mirror))
+    if yaml_vars.get("mirrors") is None:
+        raise Exception(f"Missing mirrors descriptions on %s" % (args.yaml_mirror))
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("Yaml mirror input: %s\n\n" % args.yaml_mirror)
 
     total = 0
-    for mirror in yaml_vars['mirrors']:
+    for mirror in yaml_vars["mirrors"]:
         total = total + 1
 
-        if mirror.get('short_name') is None:
-            raise Exception(f"Missing required short_name on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('url') is None:
-            raise Exception(f"Missing required url on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('target') is None:
-            raise Exception(f"Missing required target on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-
-        short_name = mirror['short_name'].replace("/", "-")
-        url = mirror['url']
-        target = mirror['target']
+        if mirror.get("short_name") is None:
+            raise Exception(
+                f"Missing required short_name on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("url") is None:
+            raise Exception(
+                f"Missing required url on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("target") is None:
+            raise Exception(
+                f"Missing required target on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+
+        short_name = mirror["short_name"].replace("/", "-")
+        url = mirror["url"]
+        target = mirror["target"]
 
         service_file = f"{yaml_dir}" + short_name + "-mirror" + ".service"
         timer_file = f"{yaml_dir}" + short_name + "-mirror" + ".timer"
 
         refresh = args.refresh
-        if mirror.get('refresh'):
-            refresh = mirror.get('refresh')
+        if mirror.get("refresh"):
+            refresh = mirror.get("refresh")
         refresh_on_boot = args.refresh_on_boot
-        if mirror.get('refresh_on_boot'):
-            refresh = mirror.get('refresh_on_boot')
+        if mirror.get("refresh_on_boot"):
+            refresh_on_boot = mirror.get("refresh_on_boot")
 
-        if (args.verbose):
+        if args.verbose:
             sys.stdout.write("Mirror #%d\n" % total)
-            sys.stdout.write("\tshort_name: %s\n" % (mirror['short_name']))
-            sys.stdout.write("\turl: %s\n" % (mirror['short_name']))
-            sys.stdout.write("\ttarget: %s\n" % (mirror['short_name']))
+            sys.stdout.write("\tshort_name: %s\n" % (mirror["short_name"]))
+            sys.stdout.write("\turl: %s\n" % (mirror["url"]))
+            sys.stdout.write("\ttarget: %s\n" % (mirror["target"]))
             sys.stdout.write("\tservice: %s\n" % (service_file))
             sys.stdout.write("\ttimer: %s\n" % (timer_file))
             sys.stdout.write("\trefresh: %s\n" % (refresh))
             sys.stdout.write("\trefresh_on_boot: %s\n" % (refresh_on_boot))
 
         if os.path.exists(service_file):
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_service: True\n")
             os.remove(service_file)
         else:
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_service: False\n")
 
-        output_service = open(service_file, 'w')
+        output_service = open(service_file, "w")
         context = {
-            "short_name" : short_name,
-            "url" : url,
-            "target" : target,
+            "short_name": short_name,
+            "url": url,
+            "target": target,
         }
         output_service.write(service_template.format(**context))
         output_service.close()
 
         if os.path.exists(timer_file):
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_timer: True\n")
             os.remove(timer_file)
         else:
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_timer: False\n")
 
-        output_timer = open(timer_file, 'w')
+        output_timer = open(timer_file, "w")
         context = {
-            "short_name" : short_name,
-            "url" : url,
-            "target" : target,
-            "refresh" : refresh,
-            "refresh_on_boot" : refresh_on_boot,
+            "short_name": short_name,
+            "url": url,
+            "target": target,
+            "refresh": refresh,
+            "refresh_on_boot": refresh_on_boot,
         }
         output_timer.write(timer_template.format(**context))
         output_timer.close()
 
+
 if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/linux-mirror/python/start-mirroring.py b/playbooks/roles/linux-mirror/python/start-mirroring.py
index 4e6b9ec2b9ea..03ede449fe2a 100755
--- a/playbooks/roles/linux-mirror/python/start-mirroring.py
+++ b/playbooks/roles/linux-mirror/python/start-mirroring.py
@@ -14,24 +14,25 @@ import os
 from pathlib import Path
 import subprocess
 
-topdir = os.environ.get('TOPDIR', '.')
+topdir = os.environ.get("TOPDIR", ".")
 yaml_dir = topdir + "/playbooks/roles/linux-mirror/linux-mirror-systemd/"
-default_mirrors_yaml = yaml_dir + 'mirrors.yaml'
+default_mirrors_yaml = yaml_dir + "mirrors.yaml"
+
+mirror_path = "/mirror/"
 
-mirror_path = '/mirror/'
 
 def mirror_entry(mirror, args):
-    short_name = mirror['short_name']
-    url = mirror['url']
-    target = mirror['target']
+    short_name = mirror["short_name"]
+    url = mirror["url"]
+    target = mirror["target"]
     reference = None
     reference_args = []
 
-    if mirror.get('reference'):
-        reference = mirror_path + mirror.get('reference')
-        reference_args = [ '--reference', reference ]
+    if mirror.get("reference"):
+        reference = mirror_path + mirror.get("reference")
+        reference_args = ["--reference", reference]
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("\tshort_name: %s\n" % (short_name))
         sys.stdout.write("\turl: %s\n" % (url))
         sys.stdout.write("\ttarget: %s\n" % (url))
@@ -40,28 +41,31 @@ def mirror_entry(mirror, args):
         else:
             sys.stdout.write("\treference: %s\n" % (reference))
     cmd = [
-           'git',
-           '-C',
-           mirror_path,
-           'clone',
-           '--verbose',
-           '--progress',
-           '--mirror',
-           url,
-           target ]
+        "git",
+        "-C",
+        mirror_path,
+        "clone",
+        "--verbose",
+        "--progress",
+        "--mirror",
+        url,
+        target,
+    ]
     cmd = cmd + reference_args
     mirror_target = mirror_path + target
     if os.path.isdir(mirror_target):
         return
     sys.stdout.write("Mirroring: %s onto %s\n" % (short_name, mirror_target))
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("%s\n" % (cmd))
         sys.stdout.write("%s\n" % (" ".join(cmd)))
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     try:
         data = process.communicate(timeout=12000)
     except subprocess.TimeoutExpired:
@@ -73,12 +77,21 @@ def mirror_entry(mirror, args):
 
 
 def main():
-    parser = argparse.ArgumentParser(description='start-mirroring')
-    parser.add_argument('--yaml-mirror', metavar='<yaml_mirror>', type=str,
-                        default=default_mirrors_yaml,
-                        help='The yaml mirror input file.')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
+    parser = argparse.ArgumentParser(description="start-mirroring")
+    parser.add_argument(
+        "--yaml-mirror",
+        metavar="<yaml_mirror>",
+        type=str,
+        default=default_mirrors_yaml,
+        help="The yaml mirror input file.",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.yaml_mirror):
@@ -86,14 +99,13 @@ def main():
         sys.exit(1)
 
     # load the yaml input file
-    with open(f'{args.yaml_mirror}') as stream:
+    with open(f"{args.yaml_mirror}") as stream:
         yaml_vars = yaml.safe_load(stream)
 
-    if yaml_vars.get('mirrors') is None:
-        raise Exception(f"Missing mirrors descriptions on %s" %
-                        (args.yaml_mirror))
+    if yaml_vars.get("mirrors") is None:
+        raise Exception(f"Missing mirrors descriptions on %s" % (args.yaml_mirror))
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("Yaml mirror input: %s\n\n" % args.yaml_mirror)
 
     # We do 3 passes, first to check the file has all requirements
@@ -103,25 +115,35 @@ def main():
     # The second pass is for mirrors which do not have a reference, the
     # third and final pass is for mirrors which do have a reference.
     total = 0
-    for mirror in yaml_vars['mirrors']:
+    for mirror in yaml_vars["mirrors"]:
         total = total + 1
 
-        if mirror.get('short_name') is None:
-            raise Exception(f"Missing required short_name on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('url') is None:
-            raise Exception(f"Missing required url on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('target') is None:
-            raise Exception(f"Missing required target for mirror %s on yaml file %s on item #%d" % (mirror.get('short_name'), args.yaml_mirror, total))
+        if mirror.get("short_name") is None:
+            raise Exception(
+                f"Missing required short_name on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("url") is None:
+            raise Exception(
+                f"Missing required url on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("target") is None:
+            raise Exception(
+                f"Missing required target for mirror %s on yaml file %s on item #%d"
+                % (mirror.get("short_name"), args.yaml_mirror, total)
+            )
 
     # Mirror trees without a reference first
-    for mirror in yaml_vars['mirrors']:
-        if not mirror.get('reference'):
+    for mirror in yaml_vars["mirrors"]:
+        if not mirror.get("reference"):
             mirror_entry(mirror, args)
 
     # Mirror trees which need a reference last
-    for mirror in yaml_vars['mirrors']:
-        if mirror.get('reference'):
+    for mirror in yaml_vars["mirrors"]:
+        if mirror.get("reference"):
             mirror_entry(mirror, args)
 
+
 if __name__ == "__main__":
     main()
diff --git a/scripts/check_commit_format.py b/scripts/check_commit_format.py
index f72f9d1a0352..3ac37944418c 100755
--- a/scripts/check_commit_format.py
+++ b/scripts/check_commit_format.py
@@ -11,11 +11,16 @@ import subprocess
 import sys
 import re
 
+
 def get_latest_commit_message():
     """Get the latest commit message"""
     try:
-        result = subprocess.run(['git', 'log', '-1', '--pretty=format:%B'],
-                              capture_output=True, text=True, check=True)
+        result = subprocess.run(
+            ["git", "log", "-1", "--pretty=format:%B"],
+            capture_output=True,
+            text=True,
+            check=True,
+        )
         return result.stdout
     except subprocess.CalledProcessError:
         print("Error: Failed to get commit message")
@@ -24,30 +29,35 @@ def get_latest_commit_message():
         print("Error: git command not found")
         return None
 
+
 def check_commit_format(commit_msg):
     """Check commit message formatting"""
     issues = []
     if not commit_msg:
         return ["No commit message found"]
-    lines = commit_msg.strip().split('\n')
+    lines = commit_msg.strip().split("\n")
     # Find Generated-by line
     generated_by_idx = None
     signed_off_by_idx = None
     for i, line in enumerate(lines):
-        if line.startswith('Generated-by: Claude AI'):
+        if line.startswith("Generated-by: Claude AI"):
             generated_by_idx = i
-        elif line.startswith('Signed-off-by:'):
+        elif line.startswith("Signed-off-by:"):
             signed_off_by_idx = i
     # If Generated-by is present, check formatting
     if generated_by_idx is not None:
         if signed_off_by_idx is None:
-            issues.append("Generated-by: Claude AI found but no Signed-off-by tag present")
+            issues.append(
+                "Generated-by: Claude AI found but no Signed-off-by tag present"
+            )
         else:
             # Check if Generated-by is immediately followed by Signed-off-by (no lines in between)
             if signed_off_by_idx != generated_by_idx + 1:
                 lines_between = signed_off_by_idx - generated_by_idx - 1
                 if lines_between > 0:
-                    issues.append(f"Generated-by: Claude AI must be immediately followed by Signed-off-by (found {lines_between} lines between them)")
+                    issues.append(
+                        f"Generated-by: Claude AI must be immediately followed by Signed-off-by (found {lines_between} lines between them)"
+                    )
                     for i in range(generated_by_idx + 1, signed_off_by_idx):
                         if lines[i].strip():
                             issues.append(f"  - Non-empty line at {i+1}: '{lines[i]}'")
@@ -55,6 +65,7 @@ def check_commit_format(commit_msg):
                             issues.append(f"  - Empty line at {i+1}")
     return issues
 
+
 def main():
     """Main function to check commit message format"""
     commit_msg = get_latest_commit_message()
@@ -81,5 +92,6 @@ def main():
         print("✅ Commit message formatting is correct!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/coccinelle/generation/check_for_atomic_calls.py b/scripts/coccinelle/generation/check_for_atomic_calls.py
index 5849f9643de5..9fb06ef2b6e1 100755
--- a/scripts/coccinelle/generation/check_for_atomic_calls.py
+++ b/scripts/coccinelle/generation/check_for_atomic_calls.py
@@ -28,27 +28,27 @@ parser = argparse.ArgumentParser(
     description="Generate a Coccinelle checker for atomic context in transitive callers of a target function."
 )
 parser.add_argument(
-    "--levels", "-l",
+    "--levels",
+    "-l",
     type=int,
     required=True,
-    help="Maximum number of transitive caller levels to follow (e.g., 5)"
+    help="Maximum number of transitive caller levels to follow (e.g., 5)",
 )
 parser.add_argument(
-    "--target", "-t",
+    "--target",
+    "-t",
     type=str,
     required=True,
-    help="Target function to trace (e.g., __find_get_block_slow)"
+    help="Target function to trace (e.g., __find_get_block_slow)",
 )
 parser.add_argument(
-    "--output", "-o",
-    type=str,
-    required=True,
-    help="Output .cocci file to generate"
+    "--output", "-o", type=str, required=True, help="Output .cocci file to generate"
 )
 args = parser.parse_args()
 max_depth = args.levels
 target_func = args.target
 
+
 # Add a function to get the number of processors for parallel jobs
 def get_nprocs():
     try:
@@ -56,6 +56,7 @@ def get_nprocs():
     except:
         return 1  # Default to 1 if can't determine
 
+
 outfile = args.output
 header = f"""// SPDX-License-Identifier: GPL-2.0
 /// Autogenerated by gen_atomic_context_chain.py
@@ -141,10 +142,11 @@ register_caller(fn, None)
 """
 with open(outfile, "w") as f:
     f.write(header)
-    
+
     # Generate all the caller chain rules
     for level in range(1, max_depth + 1):
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} caller discovery
 @caller{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -162,12 +164,14 @@ transitive_caller << virtual.transitive_caller;
 @@
 print(f"🔄 Chain level {level}: {{fn}} calls {{transitive_caller}} at {{p[0].file}}:{{p[0].line}}")
 register_caller(fn, p[0].file)
-""")
+"""
+        )
 
     # Check for atomic context in each caller in our chain
     for level in range(1, max_depth + 1):
         # First, check for common atomic primitives
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} atomic context check - Common atomic primitives
 @atomiccheck{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -246,10 +250,12 @@ key = (p1[0].file, p1[0].line, transitive_caller)
 if key not in seen_atomic:
     seen_atomic.add(key)
     print(f"⚠️  WARNING: atomic context at level {level}: {{p1[0].current_element}} at {{p1[0].file}}:{{p1[0].line}} may reach {{transitive_caller}}() → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for lock-related functions directly calling our chain
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} atomic context check - Lock-related functions
 @atomic_fn_check{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -275,10 +281,12 @@ transitive_caller << virtual.transitive_caller;
 atomic_keywords = ['lock', 'irq', 'atomic', 'bh', 'intr', 'preempt', 'disable', 'napi', 'rcu']
 if any(kw in lock_fn.lower() for kw in atomic_keywords):
     print(f"⚠️  WARNING: potential atomic function at level {level}: {{lock_fn}} (name suggests lock handling) contains call to {{transitive_caller}}() at {{p[0].file}}:{{p[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for spinlock regions
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} spinlock region check
 @spinlock_region{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -319,10 +327,12 @@ key = (p1[0].file, p1[0].line, p3[0].line, transitive_caller)
 if key not in seen_spinlock_regions:
     seen_spinlock_regions.add(key)
     print(f"⚠️  WARNING: spinlock region at level {level}: {{p1[0].current_element}} at {{p1[0].file}}:{{p1[0].line}} contains call to {{transitive_caller}}() at line {{p3[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Look for functions that can't sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Can't sleep contexts
 @cant_sleep{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -346,10 +356,12 @@ p2 << cant_sleep{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Non-sleeping context at {{p1[0].file}}:{{p1[0].line}} but calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for network driver contexts
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Network driver contexts (commonly atomic)
 @netdriver{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -389,10 +401,12 @@ p2 << netdriver{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Network driver context at {{p1[0].file}}:{{p1[0].line}} but calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for functions that might call from atomic context by name
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Function with name suggesting atomic context
 @atomic_name{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -411,10 +425,12 @@ p << atomic_name{level}.p;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Function with atomic-suggesting name {{atomic_fn}} calls {{transitive_caller}}() at {{p[0].file}}:{{p[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for sleep-incompatible contexts but target function might sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Target function called in context where might_sleep is used
 @might_sleep_check{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -436,8 +452,11 @@ p2 << might_sleep_check{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Function has might_sleep() at {{p1[0].file}}:{{p1[0].line}} but also calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
     f.write("\n")
 
-print(f"✅ Generated {outfile} with enhanced atomic checks for `{target_func}` up to {max_depth} levels. Run with: make coccicheck MODE=report COCCI={outfile} J={get_nprocs()}")
+print(
+    f"✅ Generated {outfile} with enhanced atomic checks for `{target_func}` up to {max_depth} levels. Run with: make coccicheck MODE=report COCCI={outfile} J={get_nprocs()}"
+)
diff --git a/scripts/coccinelle/generation/check_for_sleepy_calls.py b/scripts/coccinelle/generation/check_for_sleepy_calls.py
index 87bd5b264203..a32c9ee2c995 100755
--- a/scripts/coccinelle/generation/check_for_sleepy_calls.py
+++ b/scripts/coccinelle/generation/check_for_sleepy_calls.py
@@ -34,34 +34,35 @@ parser = argparse.ArgumentParser(
     description="Generate a Coccinelle checker to find sleeping functions called by a target function."
 )
 parser.add_argument(
-    "--function", "-f",
+    "--function",
+    "-f",
     type=str,
     required=True,
-    help="Target function to analyze (e.g., netif_rx_ni)"
+    help="Target function to analyze (e.g., netif_rx_ni)",
 )
 parser.add_argument(
-    "--max-depth", "-d",
+    "--max-depth",
+    "-d",
     type=int,
     default=3,
-    help="Maximum depth of function call chain to analyze (default: 3)"
+    help="Maximum depth of function call chain to analyze (default: 3)",
 )
 parser.add_argument(
-    "--output", "-o",
-    type=str,
-    required=True,
-    help="Output .cocci file to generate"
+    "--output", "-o", type=str, required=True, help="Output .cocci file to generate"
 )
 parser.add_argument(
-    "--sleepy-function", "-s",
+    "--sleepy-function",
+    "-s",
     type=str,
     default=None,
-    help="Specific function to check for that may cause sleeping (e.g., folio_wait_locked)"
+    help="Specific function to check for that may cause sleeping (e.g., folio_wait_locked)",
 )
 parser.add_argument(
-    "--expected", "-e",
+    "--expected",
+    "-e",
     action="store_true",
     default=False,
-    help="Indicate that the function is expected to have a sleep path (verified by manual inspection)"
+    help="Indicate that the function is expected to have a sleep path (verified by manual inspection)",
 )
 args = parser.parse_args()
 target_func = args.function
@@ -70,6 +71,7 @@ outfile = args.output
 sleepy_func = args.sleepy_function
 expected_to_sleep = args.expected
 
+
 # Add a function to get the number of processors for parallel jobs
 def get_nprocs():
     try:
@@ -77,17 +79,49 @@ def get_nprocs():
     except:
         return 1  # Default to 1 if can't determine
 
+
 # List of common functions known to sleep
 known_sleepy_functions = [
-    "msleep", "ssleep", "usleep_range", "schedule", "schedule_timeout",
-    "wait_event", "wait_for_completion", "mutex_lock", "down_read", "down_write",
-    "kthread_create", "kthread_run", "kmalloc", "__kmalloc", "kmem_cache_alloc", 
-    "vmalloc", "vzalloc", "kvmalloc", "kzalloc", "__vmalloc", "kvzalloc",
-    "sock_create", "sock_create_kern", "sock_create_lite", "sock_socket", 
-    "filp_open", "open_bdev_exclusive", "create_workqueue", 
-    "alloc_workqueue", "__alloc_workqueue_key", "request_threaded_irq",
-    "request_module", "try_module_get", "module_put", "printk", "GFP_KERNEL",
-    "copy_from_user", "copy_to_user", "__copy_from_user", "__copy_to_user"
+    "msleep",
+    "ssleep",
+    "usleep_range",
+    "schedule",
+    "schedule_timeout",
+    "wait_event",
+    "wait_for_completion",
+    "mutex_lock",
+    "down_read",
+    "down_write",
+    "kthread_create",
+    "kthread_run",
+    "kmalloc",
+    "__kmalloc",
+    "kmem_cache_alloc",
+    "vmalloc",
+    "vzalloc",
+    "kvmalloc",
+    "kzalloc",
+    "__vmalloc",
+    "kvzalloc",
+    "sock_create",
+    "sock_create_kern",
+    "sock_create_lite",
+    "sock_socket",
+    "filp_open",
+    "open_bdev_exclusive",
+    "create_workqueue",
+    "alloc_workqueue",
+    "__alloc_workqueue_key",
+    "request_threaded_irq",
+    "request_module",
+    "try_module_get",
+    "module_put",
+    "printk",
+    "GFP_KERNEL",
+    "copy_from_user",
+    "copy_to_user",
+    "__copy_from_user",
+    "__copy_to_user",
 ]
 
 # If a specific sleepy function is provided, only check for that one
@@ -97,9 +131,17 @@ if sleepy_func:
 
 # List of common GFP flags that indicate sleeping is allowed
 sleepy_gfp_flags = [
-    "GFP_KERNEL", "GFP_USER", "GFP_HIGHUSER", "GFP_DMA", 
-    "GFP_DMA32", "GFP_NOWAIT", "GFP_NOIO", "GFP_NOFS",
-    "__GFP_WAIT", "__GFP_IO", "__GFP_FS"
+    "GFP_KERNEL",
+    "GFP_USER",
+    "GFP_HIGHUSER",
+    "GFP_DMA",
+    "GFP_DMA32",
+    "GFP_NOWAIT",
+    "GFP_NOIO",
+    "GFP_NOFS",
+    "__GFP_WAIT",
+    "__GFP_IO",
+    "__GFP_FS",
 ]
 
 # Create a stats directory
@@ -111,8 +153,9 @@ with open(outfile, "w") as f:
     title = f"Detect if function '{target_func}' calls any functions that might sleep"
     if sleepy_func:
         title = f"Detect if function '{target_func}' calls '{sleepy_func}'"
-    
-    f.write(f"""// SPDX-License-Identifier: GPL-2.0
+
+    f.write(
+        f"""// SPDX-License-Identifier: GPL-2.0
 /// Autogenerated by check_for_sleepy_calls.py
 /// {title}
 // Options: --no-includes --include-headers
@@ -281,10 +324,12 @@ def save_stats():
     
     with open(stats_file, "w") as f:
         json.dump(stats, f, indent=2)
-""")
+"""
+    )
 
     # Define the rule to find direct calls to the target function
-    f.write(f"""
+    f.write(
+        f"""
 // Find direct function calls made by target function
 @find_calls@
 identifier fn;
@@ -305,11 +350,13 @@ total_calls_checked += 1
 register_call(target_func, fn, p[0].file, p[0].line)
 register_func_for_analysis(fn)
 save_stats()
-""")
+"""
+    )
 
     # Add direct checking for specific sleepy function if provided
     if sleepy_func:
-        f.write(f"""
+        f.write(
+            f"""
 // Direct check: Does target function call sleepy function directly?
 @direct_sleepy_call@
 position p;
@@ -327,12 +374,16 @@ global total_sleep_routines_checked
 total_sleep_routines_checked += 1
 register_sleep_point(target_func, sleepy_func, p[0].file, p[0].line, "directly calls target sleep function")
 save_stats()
-""")
-    
+"""
+        )
+
     # Generate rules for checking nested function calls
-    for depth in range(2, max_depth + 1):  # Start from 2 as level 1 is the direct call we already checked
+    for depth in range(
+        2, max_depth + 1
+    ):  # Start from 2 as level 1 is the direct call we already checked
         # Find functions called by functions at the previous level
-        f.write(f"""
+        f.write(
+            f"""
 // Level {depth} - Find functions called by level {depth-1} functions
 @find_calls_l{depth}@
 identifier fn1;
@@ -357,11 +408,13 @@ if fn1 in seen_funcs and fn1 != fn2:  # Avoid self-recursion
     register_call(fn1, fn2, p[0].file, p[0].line)
     register_func_for_analysis(fn2)
     save_stats()
-""")
+"""
+        )
 
         if sleepy_func:
             # If looking for a specific sleepy function, check at this level
-            f.write(f"""
+            f.write(
+                f"""
 // Level {depth} - Find calls to sleepy function
 @sleepy_call_l{depth}@
 identifier fn;
@@ -383,10 +436,12 @@ total_sleep_routines_checked += 1
 if fn in seen_funcs:
     register_sleep_point(fn, sleepy_func, p[0].file, p[0].line, f"level {depth} call to target sleep function")
     save_stats()
-""")
+"""
+            )
         else:
             # If doing general sleep checking, check for known sleepy functions at this level
-            f.write(f"""
+            f.write(
+                f"""
 // Level {depth} - Check for known sleepy functions
 @known_sleepers_l{depth}@
 identifier fn;
@@ -394,16 +449,22 @@ position p;
 @@
 fn(...) {{
   <...
-  (""")
+  ("""
+            )
             # Add all known sleepy functions to the pattern
             for i, sleepy_func_name in enumerate(known_sleepy_functions):
                 if i > 0:
-                    f.write(f"""
-  |""")
-                f.write(f"""
-  {sleepy_func_name}@p(...)""")
-            
-            f.write(f"""
+                    f.write(
+                        f"""
+  |"""
+                    )
+                f.write(
+                    f"""
+  {sleepy_func_name}@p(...)"""
+                )
+
+            f.write(
+                f"""
   )
   ...>
 }}
@@ -419,12 +480,14 @@ if fn in seen_funcs:
     sleep_func = p[0].current_element
     register_sleep_point(fn, sleep_func, p[0].file, p[0].line, f"level {depth} call to known sleeping function")
     save_stats()
-""")
+"""
+            )
 
     # Only add the other sleep detection rules if we're not constraining to a specific function
     if not sleepy_func:
         # Check for GFP_KERNEL and other sleepy allocation flags
-        f.write(f"""
+        f.write(
+            f"""
 // Check for sleepy memory allocation flags
 @check_sleepy_alloc@
 position p;
@@ -432,15 +495,21 @@ identifier fn;
 @@
 fn(...) {{
   <...
-  (""")
+  ("""
+        )
         # Add patterns for all sleepy GFP flags
         for i, flag in enumerate(sleepy_gfp_flags):
             if i > 0:
-                f.write(f"""
-  |""")
-            f.write(f"""
-  {flag}@p""")
-        f.write(f"""
+                f.write(
+                    f"""
+  |"""
+                )
+            f.write(
+                f"""
+  {flag}@p"""
+            )
+        f.write(
+            f"""
   )
   ...>
 }}
@@ -456,10 +525,12 @@ if fn in seen_funcs:
     flag = p[0].current_element
     register_sleep_point(fn, flag, p[0].file, p[0].line, "uses allocation flag that may sleep")
     save_stats()
-""")
+"""
+        )
 
         # Check for mutex locks
-        f.write(f"""
+        f.write(
+            f"""
 // Check for mutex locks
 @check_mutex@
 position p;
@@ -510,10 +581,12 @@ if fn in seen_funcs:
     lock_func = p[0].current_element
     register_sleep_point(fn, lock_func, p[0].file, p[0].line, "uses mutex or completion that may sleep")
     save_stats()
-""")
+"""
+        )
 
         # Check for might_sleep calls
-        f.write(f"""
+        f.write(
+            f"""
 // Check for explicit might_sleep calls
 @check_might_sleep@
 position p;
@@ -542,10 +615,12 @@ if fn in seen_funcs:
     sleep_func = p[0].current_element
     register_sleep_point(fn, sleep_func, p[0].file, p[0].line, "contains explicit might_sleep() call")
     save_stats()
-""")
+"""
+        )
 
         # Check for functions with names suggesting they might sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Check for functions with sleep-suggesting names
 @check_sleep_names@
 position p;
@@ -574,10 +649,12 @@ if fn in seen_funcs:
             sleep_fn.startswith("local_")):
         register_sleep_point(fn, sleep_fn, p[0].file, p[0].line, "calls function with name suggesting it might sleep")
         save_stats()
-""")
+"""
+        )
 
     # Add a finalization rule that summarizes the findings
-    f.write(f"""
+    f.write(
+        f"""
 @finalize:python@
 @@
 # Save any final stats before finishing
@@ -669,7 +746,8 @@ try:
     shutil.rmtree(stats_dir)
 except Exception as e:
     print(f"Note: Could not clean up stats directory: {{e}}")
-""")
+"""
+    )
 
 msg = f"✅ Generated {outfile} to check if '{target_func}' might sleep"
 if sleepy_func:
diff --git a/scripts/detect_whitespace_issues.py b/scripts/detect_whitespace_issues.py
index 165a33e25bcb..de5fef707317 100755
--- a/scripts/detect_whitespace_issues.py
+++ b/scripts/detect_whitespace_issues.py
@@ -12,37 +12,40 @@ import os
 import sys
 from pathlib import Path
 
+
 def check_file_whitespace(file_path):
     """Check a single file for whitespace issues"""
     issues = []
 
     try:
-        with open(file_path, 'rb') as f:
+        with open(file_path, "rb") as f:
             content = f.read()
 
         # Skip binary files
-        if b'\0' in content:
+        if b"\0" in content:
             return issues
 
-        lines = content.decode('utf-8', errors='ignore').splitlines(keepends=True)
+        lines = content.decode("utf-8", errors="ignore").splitlines(keepends=True)
 
         # Check trailing whitespace
         for line_num, line in enumerate(lines, 1):
-            if line.rstrip('\n\r').endswith(' ') or line.rstrip('\n\r').endswith('\t'):
+            if line.rstrip("\n\r").endswith(" ") or line.rstrip("\n\r").endswith("\t"):
                 issues.append(f"Line {line_num}: Trailing whitespace")
 
         # Check missing newline at end of file
-        if content and not content.endswith(b'\n'):
+        if content and not content.endswith(b"\n"):
             issues.append("Missing newline at end of file")
 
         # Check for excessive blank lines (more than 2 consecutive)
         blank_count = 0
         for line_num, line in enumerate(lines, 1):
-            if line.strip() == '':
+            if line.strip() == "":
                 blank_count += 1
             else:
                 if blank_count > 2:
-                    issues.append(f"Line {line_num - blank_count}: {blank_count} consecutive blank lines")
+                    issues.append(
+                        f"Line {line_num - blank_count}: {blank_count} consecutive blank lines"
+                    )
                 blank_count = 0
 
     except Exception as e:
@@ -50,6 +53,7 @@ def check_file_whitespace(file_path):
 
     return issues
 
+
 def main():
     """Main function to scan for whitespace issues"""
     if len(sys.argv) > 1:
@@ -57,10 +61,15 @@ def main():
     else:
         # Default to git tracked files with modifications
         import subprocess
+
         try:
-            result = subprocess.run(['git', 'diff', '--name-only'],
-                                  capture_output=True, text=True, check=True)
-            paths = result.stdout.strip().split('\n') if result.stdout.strip() else []
+            result = subprocess.run(
+                ["git", "diff", "--name-only"],
+                capture_output=True,
+                text=True,
+                check=True,
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
             if not paths:
                 print("No modified files found in git")
                 return
@@ -82,7 +91,7 @@ def main():
 
         if path.is_file():
             # Skip certain file types
-            if path.suffix in ['.pyc', '.so', '.o', '.bin', '.jpg', '.png', '.gif']:
+            if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif"]:
                 continue
 
             issues = check_file_whitespace(path)
@@ -93,7 +102,9 @@ def main():
                 for issue in issues:
                     print(f"  ⚠️  {issue}")
 
-    print(f"\nSummary: {total_issues} whitespace issues found in {files_with_issues} files")
+    print(
+        f"\nSummary: {total_issues} whitespace issues found in {files_with_issues} files"
+    )
 
     if total_issues > 0:
         print("\nTo fix these issues:")
@@ -105,5 +116,6 @@ def main():
         print("✅ No whitespace issues found!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/fix_whitespace_issues.py b/scripts/fix_whitespace_issues.py
index 3e69ea5082ba..585ff8b9d54b 100755
--- a/scripts/fix_whitespace_issues.py
+++ b/scripts/fix_whitespace_issues.py
@@ -12,19 +12,20 @@ import os
 import sys
 from pathlib import Path
 
+
 def fix_file_whitespace(file_path):
     """Fix whitespace issues in a single file"""
     issues_fixed = []
 
     try:
-        with open(file_path, 'rb') as f:
+        with open(file_path, "rb") as f:
             content = f.read()
 
         # Skip binary files
-        if b'\0' in content:
+        if b"\0" in content:
             return issues_fixed
 
-        original_content = content.decode('utf-8', errors='ignore')
+        original_content = content.decode("utf-8", errors="ignore")
         lines = original_content.splitlines(keepends=True)
         modified = False
 
@@ -33,12 +34,12 @@ def fix_file_whitespace(file_path):
         for line_num, line in enumerate(lines, 1):
             original_line = line
             # Remove trailing whitespace but preserve line endings
-            if line.endswith('\r\n'):
-                cleaned_line = line.rstrip(' \t\r\n') + '\r\n'
-            elif line.endswith('\n'):
-                cleaned_line = line.rstrip(' \t\n') + '\n'
+            if line.endswith("\r\n"):
+                cleaned_line = line.rstrip(" \t\r\n") + "\r\n"
+            elif line.endswith("\n"):
+                cleaned_line = line.rstrip(" \t\n") + "\n"
             else:
-                cleaned_line = line.rstrip(' \t')
+                cleaned_line = line.rstrip(" \t")
 
             if original_line != cleaned_line:
                 issues_fixed.append(f"Line {line_num}: Removed trailing whitespace")
@@ -52,7 +53,7 @@ def fix_file_whitespace(file_path):
         i = 0
         while i < len(new_lines):
             line = new_lines[i]
-            if line.strip() == '':
+            if line.strip() == "":
                 blank_count += 1
                 if blank_count <= 2:
                     final_lines.append(line)
@@ -65,15 +66,15 @@ def fix_file_whitespace(file_path):
             i += 1
 
         # Fix missing newline at end of file
-        new_content = ''.join(final_lines)
-        if new_content and not new_content.endswith('\n'):
-            new_content += '\n'
+        new_content = "".join(final_lines)
+        if new_content and not new_content.endswith("\n"):
+            new_content += "\n"
             issues_fixed.append("Added missing newline at end of file")
             modified = True
 
         # Write back if modified
         if modified:
-            with open(file_path, 'w', encoding='utf-8') as f:
+            with open(file_path, "w", encoding="utf-8") as f:
                 f.write(new_content)
 
     except Exception as e:
@@ -81,6 +82,7 @@ def fix_file_whitespace(file_path):
 
     return issues_fixed
 
+
 def main():
     """Main function to fix whitespace issues"""
     if len(sys.argv) > 1:
@@ -88,10 +90,15 @@ def main():
     else:
         # Default to git tracked files with modifications
         import subprocess
+
         try:
-            result = subprocess.run(['git', 'diff', '--name-only'],
-                                  capture_output=True, text=True, check=True)
-            paths = result.stdout.strip().split('\n') if result.stdout.strip() else []
+            result = subprocess.run(
+                ["git", "diff", "--name-only"],
+                capture_output=True,
+                text=True,
+                check=True,
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
             if not paths:
                 print("No modified files found in git")
                 return 0
@@ -113,7 +120,7 @@ def main():
 
         if path.is_file():
             # Skip certain file types
-            if path.suffix in ['.pyc', '.so', '.o', '.bin', '.jpg', '.png', '.gif']:
+            if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif"]:
                 continue
 
             fixes = fix_file_whitespace(path)
@@ -133,5 +140,6 @@ def main():
         print("✅ No whitespace issues found to fix!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/generate_refs.py b/scripts/generate_refs.py
index 4bd179b9ddfd..45b743481f79 100755
--- a/scripts/generate_refs.py
+++ b/scripts/generate_refs.py
@@ -120,7 +120,7 @@ def _ref_generator_choices_static(args, conf_name, ref_name, ref_help):
         # Add "_USER_REF" suffix to avoid static duplicates when both user
         # and default Kconfig files exists. Fixes 'warning: choice value used
         # outside its choice group'
-        if 'refs' in args and args.refs != 0:
+        if "refs" in args and args.refs != 0:
             conf_name = conf_name + "_USER_REF"
         refs.update({ref_name: conf_name})
         f.write("config {}\n".format(conf_name))
@@ -328,7 +328,9 @@ def kreleases(args) -> None:
             for release in data["releases"]:
                 if release["moniker"] == args.moniker:
                     # Check if release.json is aa.bb.cc type
-                    if re.compile(r'^\d+\.\d+(\.\d+|-rc\d+)?$').match(release["version"]):
+                    if re.compile(r"^\d+\.\d+(\.\d+|-rc\d+)?$").match(
+                        release["version"]
+                    ):
                         reflist.append("v" + release["version"])
                     else:
                         reflist.append(release["version"])
diff --git a/scripts/honey-badger.py b/scripts/honey-badger.py
index 29c3966bf7ac..43dd45c2c252 100755
--- a/scripts/honey-badger.py
+++ b/scripts/honey-badger.py
@@ -21,16 +21,18 @@ KERNEL_PPA_URL = "https://kernel.ubuntu.com/mainline/"
 ARCH = "amd64"
 KERNEL_DIR = "/tmp/kernels"
 
+
 def is_dpkg_installed():
-    return shutil.which('dpkg') is not None
+    return shutil.which("dpkg") is not None
+
 
 def extract_deb(deb_file, tempdir, verbose=False, dest="/"):
     try:
         if verbose:
             print(f"Extracting {deb_file} onto {tempdir}")
         # Extract the ar archive
-        subprocess.run(['ar', 'x', deb_file], check=True, cwd=tempdir)
-        data_tarball = next(f for f in os.listdir(tempdir) if f.startswith('data.tar'))
+        subprocess.run(["ar", "x", deb_file], check=True, cwd=tempdir)
+        data_tarball = next(f for f in os.listdir(tempdir) if f.startswith("data.tar"))
         # Extract the data tarball to the correct locations
         with tarfile.open(os.path.join(tempdir, data_tarball)) as tar:
             tar.extractall(path=dest)
@@ -43,7 +45,8 @@ def extract_deb(deb_file, tempdir, verbose=False, dest="/"):
         if os.path.exists(tempdir):
             if verbose:
                 print(f"Removing temporary directory {tempdir}")
-            subprocess.run(['rm', '-rf', tempdir])
+            subprocess.run(["rm", "-rf", tempdir])
+
 
 def install_kernel_packages(package_files, verbose=False, use_ar=False, dest="/"):
     if use_ar:
@@ -56,49 +59,66 @@ def install_kernel_packages(package_files, verbose=False, use_ar=False, dest="/"
         for package in package_files:
             if verbose:
                 print("Running: dpkg %s" % package)
-            subprocess.run(['sudo', 'dpkg', '-i', package], check=True)
+            subprocess.run(["sudo", "dpkg", "-i", package], check=True)
+
 
 def parse_version(version):
-    match = re.match(r'v(\d+)\.(\d+)(?:\.(\d+))?(?:-(rc\d+))?', version)
+    match = re.match(r"v(\d+)\.(\d+)(?:\.(\d+))?(?:-(rc\d+))?", version)
     if match:
         major, minor, patch, rc = match.groups()
-        return (int(major), int(minor), int(patch) if patch else 0, rc if rc else '')
-    return (0, 0, 0, '')
+        return (int(major), int(minor), int(patch) if patch else 0, rc if rc else "")
+    return (0, 0, 0, "")
+
 
 def get_kernel_versions():
     response = requests.get(KERNEL_PPA_URL)
     response.raise_for_status()
-    soup = BeautifulSoup(response.text, 'html.parser')
-    versions = [href.strip('/') for link in soup.find_all('a') if (href := link.get('href')).startswith('v') and href.endswith('/')]
+    soup = BeautifulSoup(response.text, "html.parser")
+    versions = [
+        href.strip("/")
+        for link in soup.find_all("a")
+        if (href := link.get("href")).startswith("v") and href.endswith("/")
+    ]
 
     versions = sorted(versions, key=parse_version, reverse=True)
     return group_versions(versions)
 
+
 def group_versions(versions):
     grouped = []
-    for key, group in groupby(versions, lambda x: (parse_version(x)[0], parse_version(x)[1])):
+    for key, group in groupby(
+        versions, lambda x: (parse_version(x)[0], parse_version(x)[1])
+    ):
         group = sorted(group, key=parse_version, reverse=True)
         grouped.append(list(group))
     return grouped
 
+
 def verify_kernel_files(version):
     url = f"{KERNEL_PPA_URL}{version}/{ARCH}/"
     response = requests.get(url)
     if response.status_code != 200:
         return False
-    soup = BeautifulSoup(response.text, 'html.parser')
-    files = [a['href'] for a in soup.find_all('a') if a['href'].endswith('.deb')]
-    if any('linux-image-unsigned' in f for f in files) and any('linux-modules' in f for f in files):
+    soup = BeautifulSoup(response.text, "html.parser")
+    files = [a["href"] for a in soup.find_all("a") if a["href"].endswith(".deb")]
+    if any("linux-image-unsigned" in f for f in files) and any(
+        "linux-modules" in f for f in files
+    ):
         return True
     return False
 
+
 def download_and_install(file_type, version, verbose=False, use_ar=False, dest="/"):
     url = f"{KERNEL_PPA_URL}{version}/{ARCH}/"
     response = requests.get(url)
     response.raise_for_status()  # Ensure we raise an exception for failed requests
-    soup = BeautifulSoup(response.text, 'html.parser')
+    soup = BeautifulSoup(response.text, "html.parser")
 
-    deb_files = [a['href'] for a in soup.find_all('a') if a['href'].endswith('.deb') and file_type in a['href']]
+    deb_files = [
+        a["href"]
+        for a in soup.find_all("a")
+        if a["href"].endswith(".deb") and file_type in a["href"]
+    ]
     local_deb_files = []
 
     if not os.path.exists(KERNEL_DIR):
@@ -111,7 +131,7 @@ def download_and_install(file_type, version, verbose=False, use_ar=False, dest="
             print(f"Attempting to download from {full_url}...")
         r = requests.get(full_url, stream=True)
         if r.status_code == 200:
-            with open(local_file, 'wb') as f:
+            with open(local_file, "wb") as f:
                 f.write(r.content)
             if verbose:
                 print(f"Downloaded {local_file}.")
@@ -122,16 +142,38 @@ def download_and_install(file_type, version, verbose=False, use_ar=False, dest="
 
     install_kernel_packages(local_deb_files, verbose, use_ar, dest)
 
+
 def main():
     dpkg_installed = is_dpkg_installed()
     parser = argparse.ArgumentParser(description="Linux stable kernel honey badger")
-    parser.add_argument('--list', action='store_true', help='List available kernels')
-    parser.add_argument('--install', action='store_true', help='Install specified number of latest kernels')
-    parser.add_argument('--dest', type=str, help='Install the packages into the specified directory')
-    parser.add_argument('--verbose', action='store_true', help='Enable verbose output')
-    parser.add_argument('--use-ar', action='store_true', default=dpkg_installed, help='Do not use dpkg even if present')
-    parser.add_argument('--use-file', type=str, help='Skip download and just install this debian package file')
-    parser.add_argument('-c', '--count', type=int, default=1, help='Number of kernels to list or install')
+    parser.add_argument("--list", action="store_true", help="List available kernels")
+    parser.add_argument(
+        "--install",
+        action="store_true",
+        help="Install specified number of latest kernels",
+    )
+    parser.add_argument(
+        "--dest", type=str, help="Install the packages into the specified directory"
+    )
+    parser.add_argument("--verbose", action="store_true", help="Enable verbose output")
+    parser.add_argument(
+        "--use-ar",
+        action="store_true",
+        default=dpkg_installed,
+        help="Do not use dpkg even if present",
+    )
+    parser.add_argument(
+        "--use-file",
+        type=str,
+        help="Skip download and just install this debian package file",
+    )
+    parser.add_argument(
+        "-c",
+        "--count",
+        type=int,
+        default=1,
+        help="Number of kernels to list or install",
+    )
     args = parser.parse_args()
 
     kernel_versions_grouped = get_kernel_versions()
@@ -145,7 +187,9 @@ def main():
             break
         version = group[0]  # Pick the latest version in the group
         if args.verbose:
-            print(f"Verifying files for {version} at {KERNEL_PPA_URL}{version}/{ARCH}/...")
+            print(
+                f"Verifying files for {version} at {KERNEL_PPA_URL}{version}/{ARCH}/..."
+            )
         if verify_kernel_files(version):
             valid_versions.append(version)
         else:
@@ -157,7 +201,7 @@ def main():
         return
 
     if args.use_file and args.install:
-        pkgs = [ args.use_file ]
+        pkgs = [args.use_file]
         install_kernel_packages(pkgs, args.verbose, args.use_ar, args.dest)
         return
 
@@ -165,9 +209,12 @@ def main():
         for version in valid_versions:
             if args.verbose:
                 print(f"Installing kernel version {version}...")
-            files = ['linux-modules', 'linux-image-unsigned', 'linux-headers']
+            files = ["linux-modules", "linux-image-unsigned", "linux-headers"]
             for file_type in files:
-                download_and_install(file_type, version, args.verbose, args.use_ar, args.dest)
+                download_and_install(
+                    file_type, version, args.verbose, args.use_ar, args.dest
+                )
+
 
 if __name__ == "__main__":
     main()
diff --git a/scripts/spdxcheck.py b/scripts/spdxcheck.py
index d6fb62e24c40..3a1f3a95fe8a 100755
--- a/scripts/spdxcheck.py
+++ b/scripts/spdxcheck.py
@@ -11,31 +11,35 @@ import git
 import re
 import os
 
+
 class ParserException(Exception):
     def __init__(self, tok, txt):
         self.tok = tok
         self.txt = txt
 
+
 class SPDXException(Exception):
     def __init__(self, el, txt):
         self.el = el
         self.txt = txt
 
+
 class SPDXdata(object):
     def __init__(self):
         self.license_files = 0
         self.exception_files = 0
-        self.licenses = [ ]
-        self.exceptions = { }
+        self.licenses = []
+        self.exceptions = {}
+
 
 # Read the spdx data from the LICENSES directory
 def read_spdxdata(repo):
 
     # The subdirectories of LICENSES in the kernel source
     # Note: exceptions needs to be parsed as last directory.
-    #license_dirs = [ "preferred", "dual", "deprecated", "exceptions" ]
-    license_dirs = [ "preferred" ]
-    lictree = repo.head.commit.tree['LICENSES']
+    # license_dirs = [ "preferred", "dual", "deprecated", "exceptions" ]
+    license_dirs = ["preferred"]
+    lictree = repo.head.commit.tree["LICENSES"]
 
     spdx = SPDXdata()
 
@@ -46,50 +50,65 @@ def read_spdxdata(repo):
 
             exception = None
             for l in open(el.path).readlines():
-                if l.startswith('Valid-License-Identifier:'):
-                    lid = l.split(':')[1].strip().upper()
+                if l.startswith("Valid-License-Identifier:"):
+                    lid = l.split(":")[1].strip().upper()
                     if lid in spdx.licenses:
-                        raise SPDXException(el, 'Duplicate License Identifier: %s' %lid)
+                        raise SPDXException(
+                            el, "Duplicate License Identifier: %s" % lid
+                        )
                     else:
                         spdx.licenses.append(lid)
 
-                elif l.startswith('SPDX-Exception-Identifier:'):
-                    exception = l.split(':')[1].strip().upper()
+                elif l.startswith("SPDX-Exception-Identifier:"):
+                    exception = l.split(":")[1].strip().upper()
                     spdx.exceptions[exception] = []
 
-                elif l.startswith('SPDX-Licenses:'):
-                    for lic in l.split(':')[1].upper().strip().replace(' ', '').replace('\t', '').split(','):
+                elif l.startswith("SPDX-Licenses:"):
+                    for lic in (
+                        l.split(":")[1]
+                        .upper()
+                        .strip()
+                        .replace(" ", "")
+                        .replace("\t", "")
+                        .split(",")
+                    ):
                         if not lic in spdx.licenses:
-                            raise SPDXException(None, 'Exception %s missing license %s' %(exception, lic))
+                            raise SPDXException(
+                                None,
+                                "Exception %s missing license %s" % (exception, lic),
+                            )
                         spdx.exceptions[exception].append(lic)
 
                 elif l.startswith("License-Text:"):
                     if exception:
                         if not len(spdx.exceptions[exception]):
-                            raise SPDXException(el, 'Exception %s is missing SPDX-Licenses' %exception)
+                            raise SPDXException(
+                                el, "Exception %s is missing SPDX-Licenses" % exception
+                            )
                         spdx.exception_files += 1
                     else:
                         spdx.license_files += 1
                     break
     return spdx
 
+
 class id_parser(object):
 
-    reserved = [ 'AND', 'OR', 'WITH' ]
-    tokens = [ 'LPAR', 'RPAR', 'ID', 'EXC' ] + reserved
+    reserved = ["AND", "OR", "WITH"]
+    tokens = ["LPAR", "RPAR", "ID", "EXC"] + reserved
 
-    precedence = ( ('nonassoc', 'AND', 'OR'), )
+    precedence = (("nonassoc", "AND", "OR"),)
 
-    t_ignore = ' \t'
+    t_ignore = " \t"
 
     def __init__(self, spdx):
         self.spdx = spdx
         self.lasttok = None
         self.lastid = None
-        self.lexer = lex.lex(module = self, reflags = re.UNICODE)
+        self.lexer = lex.lex(module=self, reflags=re.UNICODE)
         # Initialize the parser. No debug file and no parser rules stored on disk
         # The rules are small enough to be generated on the fly
-        self.parser = yacc.yacc(module = self, write_tables = False, debug = False)
+        self.parser = yacc.yacc(module=self, write_tables=False, debug=False)
         self.lines_checked = 0
         self.checked = 0
         self.spdx_valid = 0
@@ -100,93 +119,95 @@ class id_parser(object):
     # Validate License and Exception IDs
     def validate(self, tok):
         id = tok.value.upper()
-        if tok.type == 'ID':
+        if tok.type == "ID":
             if not id in self.spdx.licenses:
-                raise ParserException(tok, 'Invalid License ID')
+                raise ParserException(tok, "Invalid License ID")
             self.lastid = id
-        elif tok.type == 'EXC':
+        elif tok.type == "EXC":
             if id not in self.spdx.exceptions:
-                raise ParserException(tok, 'Invalid Exception ID')
+                raise ParserException(tok, "Invalid Exception ID")
             if self.lastid not in self.spdx.exceptions[id]:
-                raise ParserException(tok, 'Exception not valid for license %s' %self.lastid)
+                raise ParserException(
+                    tok, "Exception not valid for license %s" % self.lastid
+                )
             self.lastid = None
-        elif tok.type != 'WITH':
+        elif tok.type != "WITH":
             self.lastid = None
 
     # Lexer functions
     def t_RPAR(self, tok):
-        r'\)'
+        r"\)"
         self.lasttok = tok.type
         return tok
 
     def t_LPAR(self, tok):
-        r'\('
+        r"\("
         self.lasttok = tok.type
         return tok
 
     def t_ID(self, tok):
-        r'[A-Za-z.0-9\-+]+'
+        r"[A-Za-z.0-9\-+]+"
 
-        if self.lasttok == 'EXC':
+        if self.lasttok == "EXC":
             print(tok)
-            raise ParserException(tok, 'Missing parentheses')
+            raise ParserException(tok, "Missing parentheses")
 
         tok.value = tok.value.strip()
         val = tok.value.upper()
 
         if val in self.reserved:
             tok.type = val
-        elif self.lasttok == 'WITH':
-            tok.type = 'EXC'
+        elif self.lasttok == "WITH":
+            tok.type = "EXC"
 
         self.lasttok = tok.type
         self.validate(tok)
         return tok
 
     def t_error(self, tok):
-        raise ParserException(tok, 'Invalid token')
+        raise ParserException(tok, "Invalid token")
 
     def p_expr(self, p):
-        '''expr : ID
-                | ID WITH EXC
-                | expr AND expr
-                | expr OR expr
-                | LPAR expr RPAR'''
+        """expr : ID
+        | ID WITH EXC
+        | expr AND expr
+        | expr OR expr
+        | LPAR expr RPAR"""
         pass
 
     def p_error(self, p):
         if not p:
-            raise ParserException(None, 'Unfinished license expression')
+            raise ParserException(None, "Unfinished license expression")
         else:
-            raise ParserException(p, 'Syntax error')
+            raise ParserException(p, "Syntax error")
 
     def parse(self, expr):
         self.lasttok = None
         self.lastid = None
-        self.parser.parse(expr, lexer = self.lexer)
+        self.parser.parse(expr, lexer=self.lexer)
 
     def parse_lines(self, fd, maxlines, fname):
         self.checked += 1
         self.curline = 0
         try:
             for line in fd:
-                line = line.decode(locale.getpreferredencoding(False), errors='ignore')
+                line = line.decode(locale.getpreferredencoding(False), errors="ignore")
                 self.curline += 1
                 if self.curline > maxlines:
                     break
                 self.lines_checked += 1
                 if line.find("SPDX-License-Identifier:") < 0:
                     continue
-                expr = line.split(':')[1].strip()
+                expr = line.split(":")[1].strip()
                 # Remove trailing comment closure
-                if line.strip().endswith('*/'):
-                    expr = expr.rstrip('*/').strip()
+                if line.strip().endswith("*/"):
+                    expr = expr.rstrip("*/").strip()
                 # Remove trailing xml comment closure
-                if line.strip().endswith('-->'):
-                    expr = expr.rstrip('-->').strip()
+                if line.strip().endswith("-->"):
+                    expr = expr.rstrip("-->").strip()
                 # Special case for SH magic boot code files
-                if line.startswith('LIST \"'):
-                    expr = expr.rstrip('\"').strip()
+                if line.startswith('LIST "'):
+                    expr = expr.rstrip('"').strip()
                 self.parse(expr)
                 self.spdx_valid += 1
                 #
@@ -199,11 +220,14 @@ class id_parser(object):
             if pe.tok:
                 col = line.find(expr) + pe.tok.lexpos
                 tok = pe.tok.value
-                sys.stdout.write('%s: %d:%d %s: %s\n' %(fname, self.curline, col, pe.txt, tok))
+                sys.stdout.write(
+                    "%s: %d:%d %s: %s\n" % (fname, self.curline, col, pe.txt, tok)
+                )
             else:
-                sys.stdout.write('%s: %d:0 %s\n' %(fname, self.curline, col, pe.txt))
+                sys.stdout.write("%s: %d:0 %s\n" % (fname, self.curline, pe.txt))
             self.spdx_errors += 1
 
+
 def scan_git_tree(tree):
     for el in tree.traverse():
         # Exclude stuff which would make pointless noise
@@ -214,25 +238,38 @@ def scan_git_tree(tree):
             continue
         if not os.path.isfile(el.path):
             continue
-        with open(el.path, 'rb') as fd:
+        with open(el.path, "rb") as fd:
             parser.parse_lines(fd, args.maxlines, el.path)
 
+
 def scan_git_subtree(tree, path):
-    for p in path.strip('/').split('/'):
+    for p in path.strip("/").split("/"):
         tree = tree[p]
     scan_git_tree(tree)
 
-if __name__ == '__main__':
 
-    ap = ArgumentParser(description='SPDX expression checker')
-    ap.add_argument('path', nargs='*', help='Check path or file. If not given full git tree scan. For stdin use "-"')
-    ap.add_argument('-m', '--maxlines', type=int, default=15,
-                    help='Maximum number of lines to scan in a file. Default 15')
-    ap.add_argument('-v', '--verbose', action='store_true', help='Verbose statistics output')
+if __name__ == "__main__":
+
+    ap = ArgumentParser(description="SPDX expression checker")
+    ap.add_argument(
+        "path",
+        nargs="*",
+        help='Check path or file. If not given full git tree scan. For stdin use "-"',
+    )
+    ap.add_argument(
+        "-m",
+        "--maxlines",
+        type=int,
+        default=15,
+        help="Maximum number of lines to scan in a file. Default 15",
+    )
+    ap.add_argument(
+        "-v", "--verbose", action="store_true", help="Verbose statistics output"
+    )
     args = ap.parse_args()
 
     # Sanity check path arguments
-    if '-' in args.path and len(args.path) > 1:
+    if "-" in args.path and len(args.path) > 1:
         sys.stderr.write('stdin input "-" must be the only path argument\n')
         sys.exit(1)
 
@@ -249,49 +286,49 @@ if __name__ == '__main__':
 
     except SPDXException as se:
         if se.el:
-            sys.stderr.write('%s: %s\n' %(se.el.path, se.txt))
+            sys.stderr.write("%s: %s\n" % (se.el.path, se.txt))
         else:
-            sys.stderr.write('%s\n' %se.txt)
+            sys.stderr.write("%s\n" % se.txt)
         sys.exit(1)
 
     except Exception as ex:
-        sys.stderr.write('FAIL: %s\n' %ex)
-        sys.stderr.write('%s\n' %traceback.format_exc())
+        sys.stderr.write("FAIL: %s\n" % ex)
+        sys.stderr.write("%s\n" % traceback.format_exc())
         sys.exit(1)
 
     try:
-        if len(args.path) and args.path[0] == '-':
-            stdin = os.fdopen(sys.stdin.fileno(), 'rb')
-            parser.parse_lines(stdin, args.maxlines, '-')
+        if len(args.path) and args.path[0] == "-":
+            stdin = os.fdopen(sys.stdin.fileno(), "rb")
+            parser.parse_lines(stdin, args.maxlines, "-")
         else:
             if args.path:
                 for p in args.path:
                     if os.path.isfile(p):
-                        parser.parse_lines(open(p, 'rb'), args.maxlines, p)
+                        parser.parse_lines(open(p, "rb"), args.maxlines, p)
                     elif os.path.isdir(p):
                         scan_git_subtree(repo.head.reference.commit.tree, p)
                     else:
-                        sys.stderr.write('path %s does not exist\n' %p)
+                        sys.stderr.write("path %s does not exist\n" % p)
                         sys.exit(1)
             else:
                 # Full git tree scan
                 scan_git_tree(repo.head.commit.tree)
 
             if args.verbose:
-                sys.stderr.write('\n')
-                sys.stderr.write('License files:     %12d\n' %spdx.license_files)
-                sys.stderr.write('Exception files:   %12d\n' %spdx.exception_files)
-                sys.stderr.write('License IDs        %12d\n' %len(spdx.licenses))
-                sys.stderr.write('Exception IDs      %12d\n' %len(spdx.exceptions))
-                sys.stderr.write('\n')
-                sys.stderr.write('Files checked:     %12d\n' %parser.checked)
-                sys.stderr.write('Lines checked:     %12d\n' %parser.lines_checked)
-                sys.stderr.write('Files with SPDX:   %12d\n' %parser.spdx_valid)
-                sys.stderr.write('Files with errors: %12d\n' %parser.spdx_errors)
+                sys.stderr.write("\n")
+                sys.stderr.write("License files:     %12d\n" % spdx.license_files)
+                sys.stderr.write("Exception files:   %12d\n" % spdx.exception_files)
+                sys.stderr.write("License IDs        %12d\n" % len(spdx.licenses))
+                sys.stderr.write("Exception IDs      %12d\n" % len(spdx.exceptions))
+                sys.stderr.write("\n")
+                sys.stderr.write("Files checked:     %12d\n" % parser.checked)
+                sys.stderr.write("Lines checked:     %12d\n" % parser.lines_checked)
+                sys.stderr.write("Files with SPDX:   %12d\n" % parser.spdx_valid)
+                sys.stderr.write("Files with errors: %12d\n" % parser.spdx_errors)
 
             sys.exit(0)
 
     except Exception as ex:
-        sys.stderr.write('FAIL: %s\n' %ex)
-        sys.stderr.write('%s\n' %traceback.format_exc())
+        sys.stderr.write("FAIL: %s\n" % ex)
+        sys.stderr.write("%s\n" % traceback.format_exc())
         sys.exit(1)
diff --git a/scripts/update_ssh_config_guestfs.py b/scripts/update_ssh_config_guestfs.py
index 4d178d498f04..8b212a9c15a2 100755
--- a/scripts/update_ssh_config_guestfs.py
+++ b/scripts/update_ssh_config_guestfs.py
@@ -30,6 +30,7 @@ ssh_template = """Host {name} {addr}
 	LogLevel FATAL
 """
 
+
 # We take the first IPv4 address on the first non-loopback interface.
 def get_addr(name):
     attempt = 0
@@ -38,7 +39,15 @@ def get_addr(name):
         if attempt > 60:
             raise Exception(f"Unable to get an address for {name} after 60s")
 
-        result = subprocess.run(['/usr/bin/virsh','qemu-agent-command',name,'{"execute":"guest-network-get-interfaces"}'], capture_output=True)
+        result = subprocess.run(
+            [
+                "/usr/bin/virsh",
+                "qemu-agent-command",
+                name,
+                '{"execute":"guest-network-get-interfaces"}',
+            ],
+            capture_output=True,
+        )
         # Did it error out? Sleep and try again.
         if result.returncode != 0:
             time.sleep(1)
@@ -48,15 +57,15 @@ def get_addr(name):
         netinfo = json.loads(result.stdout)
 
         ret = None
-        for iface in netinfo['return']:
-            if iface['name'] == 'lo':
+        for iface in netinfo["return"]:
+            if iface["name"] == "lo":
                 continue
-            if 'ip-addresses' not in iface:
+            if "ip-addresses" not in iface:
                 continue
-            for addr in iface['ip-addresses']:
-                if addr['ip-address-type'] != 'ipv4':
+            for addr in iface["ip-addresses"]:
+                if addr["ip-address-type"] != "ipv4":
                     continue
-                ret = addr['ip-address']
+                ret = addr["ip-address"]
                 break
 
         # If we didn't get an address, try again
@@ -64,11 +73,12 @@ def get_addr(name):
             return ret
         time.sleep(1)
 
+
 def main():
-    topdir = os.environ.get('TOPDIR', '.')
+    topdir = os.environ.get("TOPDIR", ".")
 
     # load extra_vars
-    with open(f'{topdir}/extra_vars.yaml') as stream:
+    with open(f"{topdir}/extra_vars.yaml") as stream:
         extra_vars = yaml.safe_load(stream)
 
     # slurp in the guestfs_nodes list
@@ -76,23 +86,28 @@ def main():
         nodes = yaml.safe_load(stream)
 
     if extra_vars.get("topdir_path_has_sha256sum", False):
-        ssh_config = f'{Path.home()}/.ssh/config_kdevops_{extra_vars["topdir_path_sha256sum"]}'
+        ssh_config = (
+            f'{Path.home()}/.ssh/config_kdevops_{extra_vars["topdir_path_sha256sum"]}'
+        )
     else:
-        ssh_config = f'{Path.home()}/.ssh/config_kdevops_{extra_vars["kdevops_host_prefix"]}'
+        ssh_config = (
+            f'{Path.home()}/.ssh/config_kdevops_{extra_vars["kdevops_host_prefix"]}'
+        )
 
     # make a stanza for each node
-    sshconf = open(ssh_config, 'w')
-    for node in nodes['guestfs_nodes']:
-        name = node['name']
+    sshconf = open(ssh_config, "w")
+    for node in nodes["guestfs_nodes"]:
+        name = node["name"]
         addr = get_addr(name)
         context = {
-            "name" : name,
-            "addr" : addr,
-            "sshkey" : f"{extra_vars['guestfs_path']}/{name}/ssh/id_ed25519"
+            "name": name,
+            "addr": addr,
+            "sshkey": f"{extra_vars['guestfs_path']}/{name}/ssh/id_ed25519",
         }
         sshconf.write(ssh_template.format(**context))
     sshconf.close()
     os.chmod(ssh_config, 0o600)
 
+
 if __name__ == "__main__":
     main()
diff --git a/scripts/workflows/blktests/blktests_watchdog.py b/scripts/workflows/blktests/blktests_watchdog.py
index 0cf7af986edf..2dd4444b0ef9 100755
--- a/scripts/workflows/blktests/blktests_watchdog.py
+++ b/scripts/workflows/blktests/blktests_watchdog.py
@@ -15,11 +15,14 @@ import configparser
 import argparse
 from itertools import chain
 
+
 def print_blktest_host_status(host, verbose, basedir, config):
     kernel = kssh.get_uname(host).rstrip()
     section = blktests.get_section(host, config)
-    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = blktests.get_blktest_host(host, basedir, kernel, section, config)
-    checktime =  blktests.get_last_run_time(host, basedir, kernel, section, last_test)
+    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = (
+        blktests.get_blktest_host(host, basedir, kernel, section, config)
+    )
+    checktime = blktests.get_last_run_time(host, basedir, kernel, section, last_test)
 
     percent_done = 0
     if checktime > 0:
@@ -38,12 +41,26 @@ def print_blktest_host_status(host, verbose, basedir, config):
             sys.stdout.write("Last    test       : None\n")
         else:
             percent_done_str = "%.0f%%" % (0)
-            sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % (host, "None", percent_done_str, 0, 0, stall_str, kernel))
+            sys.stdout.write(
+                "%35s%20s%20s%20s%20s%15s%30s\n"
+                % (host, "None", percent_done_str, 0, 0, stall_str, kernel)
+            )
         return
 
     if not verbose:
         percent_done_str = "%.0f%%" % (percent_done)
-        sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % (host, last_test, percent_done_str, str(delta_seconds), str(checktime), stall_str, kernel))
+        sys.stdout.write(
+            "%35s%20s%20s%20s%20s%15s%30s\n"
+            % (
+                host,
+                last_test,
+                percent_done_str,
+                str(delta_seconds),
+                str(checktime),
+                stall_str,
+                kernel,
+            )
+        )
         return
 
     sys.stdout.write("Host               : %s\n" % (host))
@@ -62,23 +79,37 @@ def print_blktest_host_status(host, verbose, basedir, config):
         sys.stdout.write("OK")
     sys.stdout.write("\n")
 
+
 def _main():
-    parser = argparse.ArgumentParser(description='blktest-watchdog')
-    parser.add_argument('hostfile', metavar='<ansible hostfile>', type=str,
-                        default='hosts',
-                        help='Ansible hostfile to use')
-    parser.add_argument('hostsection', metavar='<ansible hostsection>', type=str,
-                        default='baseline',
-                        help='The name of the section to read hosts from')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
+    parser = argparse.ArgumentParser(description="blktest-watchdog")
+    parser.add_argument(
+        "hostfile",
+        metavar="<ansible hostfile>",
+        type=str,
+        default="hosts",
+        help="Ansible hostfile to use",
+    )
+    parser.add_argument(
+        "hostsection",
+        metavar="<ansible hostsection>",
+        type=str,
+        default="baseline",
+        help="The name of the section to read hosts from",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.hostfile):
         sys.stdout.write("%s does not exist\n" % (args.hostfile))
         sys.exit(1)
 
-    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + '/.config'
+    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + "/.config"
     config = blktests.get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -86,9 +117,21 @@ def _main():
     basedir = os.path.dirname(dotconfig)
 
     hosts = blktests.get_hosts(args.hostfile, args.hostsection)
-    sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % ("Hostname", "Test-name", "Completion %", "runtime(s)", "last-runtime(s)", "Stall-status", "Kernel"))
+    sys.stdout.write(
+        "%35s%20s%20s%20s%20s%15s%30s\n"
+        % (
+            "Hostname",
+            "Test-name",
+            "Completion %",
+            "runtime(s)",
+            "last-runtime(s)",
+            "Stall-status",
+            "Kernel",
+        )
+    )
     for h in hosts:
         print_blktest_host_status(h, args.verbose, basedir, config)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     ret = _main()
diff --git a/scripts/workflows/cxl/gen_qemu_cxl.py b/scripts/workflows/cxl/gen_qemu_cxl.py
index 778567ea5ff3..92755f12247f 100755
--- a/scripts/workflows/cxl/gen_qemu_cxl.py
+++ b/scripts/workflows/cxl/gen_qemu_cxl.py
@@ -3,114 +3,213 @@
 import argparse
 import os
 
+
 def qemu_print(kind, value, last=False):
     global args
-    if args.format == 'xml':
-        print('<qemu:arg value=\'%s\'/>' % kind)
-        print('<qemu:arg value=\'%s\'/>' % value)
+    if args.format == "xml":
+        print("<qemu:arg value='%s'/>" % kind)
+        print("<qemu:arg value='%s'/>" % value)
     else:
-        print('%s %s %s' % (kind, value, '' if last else '\\'))
+        print("%s %s %s" % (kind, value, "" if last else "\\"))
+
 
 def host_bridge(hb_id, bus, addr):
-    return 'pxb-cxl,bus=pcie.0,id=cxl.%d,bus_nr=0x%x,addr=0x%x' % (hb_id, bus, addr)
+    return "pxb-cxl,bus=pcie.0,id=cxl.%d,bus_nr=0x%x,addr=0x%x" % (hb_id, bus, addr)
+
 
 def root_port(rp_id, hb_id, port, slot):
-    return 'cxl-rp,port=%d,bus=cxl.%d,id=cxl_rp%d,chassis=0,slot=%d' % (port, hb_id, rp_id, slot)
+    return "cxl-rp,port=%d,bus=cxl.%d,id=cxl_rp%d,chassis=0,slot=%d" % (
+        port,
+        hb_id,
+        rp_id,
+        slot,
+    )
+
 
 def switch(rp_id):
-    return 'cxl-upstream,bus=cxl_rp%d,id=cxl_switch%d,addr=0.0,multifunction=on' % (rp_id, rp_id)
+    return "cxl-upstream,bus=cxl_rp%d,id=cxl_switch%d,addr=0.0,multifunction=on" % (
+        rp_id,
+        rp_id,
+    )
+
 
 def mailbox(rp_id):
-    return 'cxl-switch-mailbox-cci,bus=cxl_rp%d,addr=0.1,target=cxl_switch%d' % (rp_id, rp_id)
+    return "cxl-switch-mailbox-cci,bus=cxl_rp%d,addr=0.1,target=cxl_switch%d" % (
+        rp_id,
+        rp_id,
+    )
+
 
 def downstream_port(dport_id, dport, rp_id, slot):
-    return 'cxl-downstream,port=%d,bus=cxl_switch%d,id=cxl_dport%d,chassis=0,slot=%d' % (dport, rp_id, dport_id, slot)
+    return (
+        "cxl-downstream,port=%d,bus=cxl_switch%d,id=cxl_dport%d,chassis=0,slot=%d"
+        % (dport, rp_id, dport_id, slot)
+    )
+
 
 def memdev(dport_id, path, sizestr, sizeval, create):
-    filename = '%s/cxl_mem%d.raw' % (path, dport_id)
-    if not(os.path.exists(filename)) and create:
-        if not(os.path.exists(path)):
-            print('ERROR: Tried to create memdev file but directory %s does not exist.' % path)
+    filename = "%s/cxl_mem%d.raw" % (path, dport_id)
+    if not (os.path.exists(filename)) and create:
+        if not (os.path.exists(path)):
+            print(
+                "ERROR: Tried to create memdev file but directory %s does not exist."
+                % path
+            )
             exit(1)
         os.umask(0)
-        with open(filename, 'wb') as file:
+        with open(filename, "wb") as file:
             file.truncate(sizeval)
-    return 'memory-backend-file,id=cxl_memdev%d,share=on,mem-path=%s,size=%s' % (dport_id, filename, sizestr)
+    return "memory-backend-file,id=cxl_memdev%d,share=on,mem-path=%s,size=%s" % (
+        dport_id,
+        filename,
+        sizestr,
+    )
+
 
 def lsa(path, create):
-    filename = '%s/cxl_lsa.raw' % path
-    if not(os.path.exists(filename)) and create:
-        if not(os.path.exists(path)):
-            print('ERROR: Tried to create lsa file but directory %s does not exist.' % path)
+    filename = "%s/cxl_lsa.raw" % path
+    if not (os.path.exists(filename)) and create:
+        if not (os.path.exists(path)):
+            print(
+                "ERROR: Tried to create lsa file but directory %s does not exist."
+                % path
+            )
             exit(1)
         os.umask(0)
-        with open(filename, 'wb') as file:
+        with open(filename, "wb") as file:
             file.truncate(256 * 1024 * 1024)
-    return 'memory-backend-file,id=cxl_lsa,share=on,mem-path=%s,size=256M' % filename
+    return "memory-backend-file,id=cxl_lsa,share=on,mem-path=%s,size=256M" % filename
 
 
 def type3(dport_id):
-    return 'cxl-type3,bus=cxl_dport%d,memdev=cxl_memdev%d,lsa=cxl_lsa,id=cxl_mem%d' % (dport_id, dport_id, dport_id)
+    return "cxl-type3,bus=cxl_dport%d,memdev=cxl_memdev%d,lsa=cxl_lsa,id=cxl_mem%d" % (
+        dport_id,
+        dport_id,
+        dport_id,
+    )
+
 
 def fmw(num_hb):
-    s = ''
+    s = ""
     for hb in range(num_hb):
-        s += 'cxl-fmw.0.targets.%d=cxl.%d,' % (hb, hb)
-    return s + 'cxl-fmw.0.size=8G,cxl-fmw.0.interleave-granularity=256'
-
-parser = argparse.ArgumentParser(description='QEMU CXL configuration generator', usage='%(prog)s [options]')
-parser.add_argument('-m', '--memdev-path', dest='memdev_path',
-                    help='Path to location of backing memdev files', required=True)
-parser.add_argument('-s', '--size', dest='size',
-                    help='Size of each memory device in bytes (i.e. 512M, 16G)', required=True)
-parser.add_argument('-c', '--create-memdev-files', dest='create_memdevs',
-                    help='Create memdev file if not found', action='store_true', default=False)
-parser.add_argument('-f', '--format', dest='format',
-                    help='Format of QEMU args',
-                    default='cmdline', choices=['cmdline', 'xml'])
-parser.add_argument('-b', '--host-bridges', dest='num_hb',
-                    help='Number of host bridges',
-                    type=int, default=1, choices=range(1,5))
-parser.add_argument('-r', '--root-ports', dest='num_rp',
-                    help='Number of root ports per host bridge',
-                    type=int, default=1, choices=range(1,5))
-parser.add_argument('-d', '--downstream-ports', dest='num_dport',
-                    help='Number of downstream ports per switch',
-                    type=int, default=1, choices=range(1,9))
-parser.add_argument('-p', '--pci-bus-number', dest='bus_nr',
-                    help='PCI bus number for first host bridge (default: 0x38)',
-                    type=int, default=0x38)
-parser.add_argument('--bus-alloc-per-host-bridge', dest='bus_per',
-                    help='Number of PCI buses to allocate per host bridge (default: 16)',
-                    type=int, default=16)
-parser.add_argument('--pci-function-number', dest='func_nr',
-                    help='Starting PCI function number for host bridges on root PCI bus 0 (default: 9)',
-                    type=int, default=9)
+        s += "cxl-fmw.0.targets.%d=cxl.%d," % (hb, hb)
+    return s + "cxl-fmw.0.size=8G,cxl-fmw.0.interleave-granularity=256"
+
+
+parser = argparse.ArgumentParser(
+    description="QEMU CXL configuration generator", usage="%(prog)s [options]"
+)
+parser.add_argument(
+    "-m",
+    "--memdev-path",
+    dest="memdev_path",
+    help="Path to location of backing memdev files",
+    required=True,
+)
+parser.add_argument(
+    "-s",
+    "--size",
+    dest="size",
+    help="Size of each memory device in bytes (e.g. 512M, 16G)",
+    required=True,
+)
+parser.add_argument(
+    "-c",
+    "--create-memdev-files",
+    dest="create_memdevs",
+    help="Create memdev file if not found",
+    action="store_true",
+    default=False,
+)
+parser.add_argument(
+    "-f",
+    "--format",
+    dest="format",
+    help="Format of QEMU args",
+    default="cmdline",
+    choices=["cmdline", "xml"],
+)
+parser.add_argument(
+    "-b",
+    "--host-bridges",
+    dest="num_hb",
+    help="Number of host bridges",
+    type=int,
+    default=1,
+    choices=range(1, 5),
+)
+parser.add_argument(
+    "-r",
+    "--root-ports",
+    dest="num_rp",
+    help="Number of root ports per host bridge",
+    type=int,
+    default=1,
+    choices=range(1, 5),
+)
+parser.add_argument(
+    "-d",
+    "--downstream-ports",
+    dest="num_dport",
+    help="Number of downstream ports per switch",
+    type=int,
+    default=1,
+    choices=range(1, 9),
+)
+parser.add_argument(
+    "-p",
+    "--pci-bus-number",
+    dest="bus_nr",
+    help="PCI bus number for first host bridge (default: 0x38)",
+    type=int,
+    default=0x38,
+)
+parser.add_argument(
+    "--bus-alloc-per-host-bridge",
+    dest="bus_per",
+    help="Number of PCI buses to allocate per host bridge (default: 16)",
+    type=int,
+    default=16,
+)
+parser.add_argument(
+    "--pci-function-number",
+    dest="func_nr",
+    help="Starting PCI function number for host bridges on root PCI bus 0 (default: 9)",
+    type=int,
+    default=9,
+)
 
 args = parser.parse_args()
 
-suffix_dict = {'M': 1024 ** 2, 'G': 1024 ** 3}
+suffix_dict = {"M": 1024**2, "G": 1024**3}
 suffix = args.size[-1].upper()
-if not(suffix in suffix_dict):
-    print('ERROR: size must end in M (for MiB) or G (for GiB)')
+if not (suffix in suffix_dict):
+    print("ERROR: size must end in M (for MiB) or G (for GiB)")
     exit(1)
 size = int(args.size[:-1]) * suffix_dict[suffix]
 
 slot = 0
-qemu_print('-machine', 'cxl=on')
-qemu_print('-object', lsa(args.memdev_path, args.create_memdevs))
+qemu_print("-machine", "cxl=on")
+qemu_print("-object", lsa(args.memdev_path, args.create_memdevs))
 for hb in range(args.num_hb):
-    qemu_print('-device', host_bridge(hb, args.bus_nr + hb * args.bus_per, hb + args.func_nr))
+    qemu_print(
+        "-device", host_bridge(hb, args.bus_nr + hb * args.bus_per, hb + args.func_nr)
+    )
     for rp in range(args.num_rp):
         rp_id = hb * args.num_rp + rp
-        qemu_print('-device', root_port(rp_id, hb, rp, slot))
+        qemu_print("-device", root_port(rp_id, hb, rp, slot))
         slot += 1
-        qemu_print('-device', switch(rp_id))
-        qemu_print('-device', mailbox(rp_id))
+        qemu_print("-device", switch(rp_id))
+        qemu_print("-device", mailbox(rp_id))
         for dport in range(args.num_dport):
-            dport_id = rp_id * args.num_dport + dport;
-            qemu_print('-device', downstream_port(dport_id, dport, rp_id, slot))
+            dport_id = rp_id * args.num_dport + dport
+            qemu_print("-device", downstream_port(dport_id, dport, rp_id, slot))
             slot += 1
-            qemu_print('-object', memdev(dport_id, args.memdev_path, args.size, size, args.create_memdevs))
-            qemu_print('-device', type3(dport_id))
-qemu_print('-M', fmw(args.num_hb), last=True)
+            qemu_print(
+                "-object",
+                memdev(
+                    dport_id, args.memdev_path, args.size, size, args.create_memdevs
+                ),
+            )
+            qemu_print("-device", type3(dport_id))
+qemu_print("-M", fmw(args.num_hb), last=True)
diff --git a/scripts/workflows/fstests/fstests_watchdog.py b/scripts/workflows/fstests/fstests_watchdog.py
index 3fef54843b9a..beb229e68fad 100755
--- a/scripts/workflows/fstests/fstests_watchdog.py
+++ b/scripts/workflows/fstests/fstests_watchdog.py
@@ -17,6 +17,7 @@ import configparser
 import argparse
 from itertools import chain
 
+
 def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config):
     if "CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE" in config and not use_ssh:
         configured_kernel = None
@@ -35,8 +36,11 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
         kernel = kssh.get_uname(host).rstrip()
 
     section = fstests.get_section(host, config)
-    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = \
-        fstests.get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
+    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = (
+        fstests.get_fstest_host(
+            use_remote, use_ssh, host, basedir, kernel, section, config
+        )
+    )
 
     checktime = fstests.get_checktime(host, basedir, kernel, section, last_test)
     percent_done = (delta_seconds * 100 / checktime) if checktime > 0 else 0
@@ -49,10 +53,9 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
             stall_str = "Hung-Stalled"
 
     crash_state = "OK"
-    watchdog = KernelCrashWatchdog(host_name=host,
-                                   decode_crash=True,
-                                   reset_host=True,
-                                   save_warnings=True)
+    watchdog = KernelCrashWatchdog(
+        host_name=host, decode_crash=True, reset_host=True, save_warnings=True
+    )
     crash_file, warning_file = watchdog.check_and_reset_host()
     if crash_file:
         crash_state = "CRASH"
@@ -60,7 +63,9 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
         crash_state = "WARNING"
 
     if not verbose:
-        soak_duration_seconds = int(config.get("CONFIG_FSTESTS_SOAK_DURATION", '0').strip('"'))
+        soak_duration_seconds = int(
+            config.get("CONFIG_FSTESTS_SOAK_DURATION", "0").strip('"')
+        )
         uses_soak = fstests.fstests_test_uses_soak_duration(last_test or "")
         is_soaking = uses_soak and soak_duration_seconds != 0
         soaking_str = "(soak)" if is_soaking else ""
@@ -83,30 +88,56 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
     sys.stdout.write("Delta: %d total second\n" % (delta_seconds))
     sys.stdout.write("\t%d minutes\n" % (delta_seconds / 60))
     sys.stdout.write("\t%d seconds\n" % (delta_seconds % 60))
-    sys.stdout.write("Timeout-status: %s\n" % ("POSSIBLE-STALL" if stall_suspect else "OK"))
+    sys.stdout.write(
+        "Timeout-status: %s\n" % ("POSSIBLE-STALL" if stall_suspect else "OK")
+    )
     sys.stdout.write("Crash-status  : %s\n" % crash_state)
 
+
 def _main():
-    parser = argparse.ArgumentParser(description='fstest-watchdog')
-    parser.add_argument('hostfile', metavar='<ansible hostfile>', type=str,
-                        default='hosts',
-                        help='Ansible hostfile to use')
-    parser.add_argument('hostsection', metavar='<ansible hostsection>', type=str,
-                        default='baseline',
-                        help='The name of the section to read hosts from')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on output.')
-    parser.add_argument('--use-systemd-remote', const=True, default=True, action="store_const",
-                        help='Use systemd-remote uploaded journals if available')
-    parser.add_argument('--use-ssh', const=True, default=False, action="store_const",
-                        help='Force to only use ssh for journals.')
+    parser = argparse.ArgumentParser(description="fstest-watchdog")
+    parser.add_argument(
+        "hostfile",
+        metavar="<ansible hostfile>",
+        type=str,
+        default="hosts",
+        help="Ansible hostfile to use",
+    )
+    parser.add_argument(
+        "hostsection",
+        metavar="<ansible hostsection>",
+        type=str,
+        default="baseline",
+        help="The name of the section to read hosts from",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
+    parser.add_argument(
+        "--use-systemd-remote",
+        const=True,
+        default=True,
+        action="store_const",
+        help="Use systemd-remote uploaded journals if available",
+    )
+    parser.add_argument(
+        "--use-ssh",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Force to only use ssh for journals.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.hostfile):
         sys.stdout.write("%s does not exist\n" % (args.hostfile))
         sys.exit(1)
 
-    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + '/.config'
+    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + "/.config"
     config = fstests.get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -119,11 +150,16 @@ def _main():
         if group is not None:
             remote_gid = group[2]
             if remote_gid not in os.getgrouplist(os.getlogin(), os.getgid()):
-                sys.stderr.write("Your username is not part of the group %s\n" % remote_group)
+                sys.stderr.write(
+                    "Your username is not part of the group %s\n" % remote_group
+                )
                 sys.stderr.write("Fix this and try again")
                 sys.exit(1)
         else:
-            sys.stderr.write("The group %s was not found, add Kconfig support for the systemd-remote-journal group used" % remote_group)
+            sys.stderr.write(
+                "The group %s was not found, add Kconfig support for the systemd-remote-journal group used"
+                % remote_group
+            )
             sys.exit(1)
 
     hosts = fstests.get_hosts(args.hostfile, args.hostsection)
@@ -133,13 +169,13 @@ def _main():
         f"{'Kernel':<38}  {'Crash-status':<10}\n"
     )
     for h in hosts:
-        print_fstest_host_status(h, args.verbose,
-                                 args.use_systemd_remote,
-                                 args.use_ssh,
-                                 basedir,
-                                 config)
+        print_fstest_host_status(
+            h, args.verbose, args.use_systemd_remote, args.use_ssh, basedir, config
+        )
 
-    soak_duration_seconds = int(config.get("CONFIG_FSTESTS_SOAK_DURATION", '0').strip('"'))
+    soak_duration_seconds = int(
+        config.get("CONFIG_FSTESTS_SOAK_DURATION", "0").strip('"')
+    )
     journal_method = "ssh"
     if "CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE" in config and not args.use_ssh:
         journal_method = "systemd-journal-remote"
@@ -147,5 +183,6 @@ def _main():
     sys.stdout.write("\n%25s%20s\n" % ("Journal-method", "Soak-duration(s)"))
     sys.stdout.write("%25s%20d\n" % (journal_method, soak_duration_seconds))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     ret = _main()
diff --git a/scripts/workflows/generic/crash_report.py b/scripts/workflows/generic/crash_report.py
index 10b4f958042d..65a05343d2ce 100755
--- a/scripts/workflows/generic/crash_report.py
+++ b/scripts/workflows/generic/crash_report.py
@@ -104,6 +104,8 @@ def generate_commit_log():
 
 if __name__ == "__main__":
     if not CRASH_DIR.exists():
-        print(f"No crashes, filesystem corruption isues, or kernel warnings were detected on this run.")
+        print(
+            f"No crashes, filesystem corruption isues, or kernel warnings were detected on this run."
+        )
         exit(0)
     generate_commit_log()
diff --git a/scripts/workflows/generic/crash_watchdog.py b/scripts/workflows/generic/crash_watchdog.py
index f271cf9c9adf..15e9251350ca 100755
--- a/scripts/workflows/generic/crash_watchdog.py
+++ b/scripts/workflows/generic/crash_watchdog.py
@@ -24,13 +24,16 @@ logging.basicConfig(
 )
 logger = logging.getLogger("crash_watchdog")
 
+
 def get_active_hosts():
     """Get the list of active hosts from kdevops configuration."""
     try:
         # First try to get the hosts from the ansible inventory
         result = subprocess.run(
             ["ansible-inventory", "-i", "hosts", "--list"],
-            capture_output=True, text=True, check=True
+            capture_output=True,
+            text=True,
+            check=True,
         )
         inventory = yaml.safe_load(result.stdout)
         hosts = inventory.get("baseline", {}).get("hosts", [])
@@ -39,6 +42,7 @@ def get_active_hosts():
         logger.error(f"Error getting active hosts: {e}")
         return []
 
+
 def run_crash_watchdog_on_host(args, this_host_name):
     watchdog = KernelCrashWatchdog(
         host_name=this_host_name,
@@ -46,13 +50,15 @@ def run_crash_watchdog_on_host(args, this_host_name):
         full_log=args.full_log,
         decode_crash=not args.no_decode,
         reset_host=not args.no_reset,
-        save_warnings = args.save_warnings,
+        save_warnings=args.save_warnings,
     )
 
     crashed = False
     warnings_found = False
 
-    crash_file, warning_file = watchdog.check_and_reset_host(method=args.method, get_fstests_log=args.fstests_log)
+    crash_file, warning_file = watchdog.check_and_reset_host(
+        method=args.method, get_fstests_log=args.fstests_log
+    )
 
     if warning_file:
         logger.warning(f"Kernel warning and logged to {warning_file}")
@@ -66,6 +72,7 @@ def run_crash_watchdog_on_host(args, this_host_name):
         logger.debug(f"No crash detected for host {this_host_name}")
     return crashed, [crash_file], warnings_found, warning_file
 
+
 def run_crash_watchdog_all_hosts(args):
     """Check all active hosts for kernel crashes."""
     hosts = get_active_hosts()
@@ -74,12 +81,12 @@ def run_crash_watchdog_all_hosts(args):
     warnings_detected = False
     warning_files = []
 
-    logger.info(
-        f"Checking {len(hosts)} hosts for kernel crashes: {', '.join(hosts)}"
-    )
+    logger.info(f"Checking {len(hosts)} hosts for kernel crashes: {', '.join(hosts)}")
 
     for host in hosts:
-        host_crash_detected, crash_file, host_warnings_detected, warnings_file = run_crash_watchdog_on_host(args, host)
+        host_crash_detected, crash_file, host_warnings_detected, warnings_file = (
+            run_crash_watchdog_on_host(args, host)
+        )
         if host_crash_detected and crash_file:
             crash_detected = True
             crash_files.append(crash_file)
@@ -87,10 +94,13 @@ def run_crash_watchdog_all_hosts(args):
         if host_warnings_detected and warnings_file:
             warnings_detected = True
             warning_files.append(warnings_file)
-            logger.warning(f"Kernel warning found on host {host}, logs saved to {warning_file}")
+            logger.warning(
+                f"Kernel warning found on host {host}, logs saved to {warnings_file}"
+            )
 
     return crash_detected, crash_files, warnings_detected, warning_files
 
+
 def write_log_section(f, title, files, label):
     f.write(f"# {title}\n\n")
     for path in files:
@@ -102,6 +112,7 @@ def write_log_section(f, title, files, label):
         except Exception as e:
             f.write(f"\nError reading {label.lower()} file: {e}\n\n")
 
+
 def main():
     parser = argparse.ArgumentParser(
         description="Detect and handle kernel crashes or kernel warnings in hosts.",
@@ -138,22 +149,45 @@ Examples:
   Get all kernel warnings only:
     ./crash_watchdog.py e3-ext4-2k --method remote --save-warnings sad.warn
         """,
-        formatter_class=argparse.RawTextHelpFormatter
+        formatter_class=argparse.RawTextHelpFormatter,
     )
 
-    parser.add_argument("--host-name", help="Optional name of the host to check", default="all")
-    parser.add_argument("--output-dir", help="Directory to store crash logs", default="crashes")
+    parser.add_argument(
+        "--host-name", help="Optional name of the host to check", default="all"
+    )
+    parser.add_argument(
+        "--output-dir", help="Directory to store crash logs", default="crashes"
+    )
     parser.add_argument(
         "--method",
         choices=["auto", "remote", "console", "ssh"],
         default="auto",
-        help="Choose method to collect logs: auto, remote, console, or ssh"
+        help="Choose method to collect logs: auto, remote, console, or ssh",
+    )
+    parser.add_argument(
+        "--full-log",
+        action="store_true",
+        help="Get full kernel log instead of only crash context",
+    )
+    parser.add_argument(
+        "--no-decode",
+        action="store_true",
+        help="Disable decoding crash logs with decode_stacktrace.sh",
+    )
+    parser.add_argument(
+        "--no-reset",
+        action="store_true",
+        help="Do not reset the guest even if a crash is detected",
+    )
+    parser.add_argument(
+        "--fstests-log",
+        help="Show all kernel log lines for a specific fstests test ID (e.g., generic/750)",
+    )
+    parser.add_argument(
+        "--save-warnings",
+        help="Do you want detected and save kernel warnings",
+        default=True,
     )
-    parser.add_argument("--full-log", action="store_true", help="Get full kernel log instead of only crash context")
-    parser.add_argument("--no-decode", action="store_true", help="Disable decoding crash logs with decode_stacktrace.sh")
-    parser.add_argument("--no-reset", action="store_true", help="Do not reset the guest even if a crash is detected")
-    parser.add_argument("--fstests-log", help="Show all kernel log lines for a specific fstests test ID (e.g., generic/750)")
-    parser.add_argument("--save-warnings", help="Do you want detected and save kernel warnings", default=True)
     args = parser.parse_args()
     crash_files = []
     warnings_files = []
@@ -164,10 +198,14 @@ Examples:
         args.save_warnings = False
         args.full_log_mode = True
 
-    if (args.host_name != "all"):
-        crash_detected, crash_files, warnings_detected, warnings_files = run_crash_watchdog_on_host(args, args.host_name)
+    if args.host_name != "all":
+        crash_detected, crash_files, warnings_detected, warnings_files = (
+            run_crash_watchdog_on_host(args, args.host_name)
+        )
     else:
-        crash_detected, crash_files, warnings_detected, warnings_files = run_crash_watchdog_all_hosts(args)
+        crash_detected, crash_files, warnings_detected, warnings_files = (
+            run_crash_watchdog_all_hosts(args)
+        )
 
     if warnings_detected:
         logger.warning("Kernel warnings detected in one or more hosts")
diff --git a/scripts/workflows/lib/blktests.py b/scripts/workflows/lib/blktests.py
index 6f1568547464..7945c7505d7a 100644
--- a/scripts/workflows/lib/blktests.py
+++ b/scripts/workflows/lib/blktests.py
@@ -7,18 +7,21 @@ import argparse
 import re
 from itertools import chain
 
+
 class BlktestsError(Exception):
     pass
 
+
 def blktests_check_pid(host):
     pid = kssh.first_process_name_pid(host, "check")
     if pid <= 0:
         return pid
-    dir = "/proc/" + str(pid)  + "/cwd/tests"
+    dir = "/proc/" + str(pid) + "/cwd/tests"
     if kssh.dir_exists(host, dir):
         return pid
     return 0
 
+
 def get_blktest_host(host, basedir, kernel, section, config):
     stall_suspect = False
     if kernel == "Uname-issue":
@@ -39,7 +42,7 @@ def get_blktest_host(host, basedir, kernel, section, config):
     last_test_time = latest_dmesg_blktest_line.split("at ")[1].rstrip()
     current_time_str = kssh.get_current_time(host).rstrip()
 
-    blktests_date_str_format = '%Y-%m-%d %H:%M:%S'
+    blktests_date_str_format = "%Y-%m-%d %H:%M:%S"
     d1 = datetime.strptime(last_test_time, blktests_date_str_format)
     d2 = datetime.strptime(current_time_str, blktests_date_str_format)
 
@@ -49,20 +52,26 @@ def get_blktest_host(host, basedir, kernel, section, config):
     if "CONFIG_BLKTESTS_WATCHDOG" not in config:
         enable_watchdog = False
     else:
-        enable_watchdog = config["CONFIG_BLKTESTS_WATCHDOG"].strip('\"')
+        enable_watchdog = config["CONFIG_BLKTESTS_WATCHDOG"].strip('"')
 
     if enable_watchdog:
-        max_new_test_time = config["CONFIG_BLKTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip('\"')
+        max_new_test_time = config["CONFIG_BLKTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip(
+            '"'
+        )
         max_new_test_time = int(max_new_test_time)
         if not max_new_test_time:
             max_new_test_time = 60
 
-        hung_multiplier_long_tests = config["CONFIG_BLKTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"].strip('\"')
+        hung_multiplier_long_tests = config[
+            "CONFIG_BLKTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"
+        ].strip('"')
         hung_multiplier_long_tests = int(hung_multiplier_long_tests)
         if not hung_multiplier_long_tests:
             hung_multiplier_long_tests = 10
 
-        hung_fast_test_max_time = config["CONFIG_BLKTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"].strip('\"')
+        hung_fast_test_max_time = config[
+            "CONFIG_BLKTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"
+        ].strip('"')
         hung_fast_test_max_time = int(hung_fast_test_max_time)
         if not hung_fast_test_max_time:
             hung_fast_test_max_time = 5
@@ -83,16 +92,21 @@ def get_blktest_host(host, basedir, kernel, section, config):
         # If a test typically takes between 1 second to 30 seconds we can likely
         # safely assume the system has crashed after hung_fast_test_max_time
         # minutes
-        elif last_run_time_s >  0:
+        elif last_run_time_s > 0:
             suspect_crash_time_seconds = 60 * hung_fast_test_max_time
 
-        if delta_seconds >= suspect_crash_time_seconds and 'blktestsstart/000' not in last_test and 'blktestsend/000' not in last_test:
+        if (
+            delta_seconds >= suspect_crash_time_seconds
+            and "blktestsstart/000" not in last_test
+            and "blktestsend/000" not in last_test
+        ):
             stall_suspect = True
 
     return (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect)
 
+
 def get_last_run_time(host, basedir, kernel, section, last_test):
-    results_dir = basedir + '/workflows/blktests/results/last-run/'
+    results_dir = basedir + "/workflows/blktests/results/last-run/"
     if not last_test:
         return 0
     if not os.path.isdir(results_dir):
@@ -115,7 +129,7 @@ def get_last_run_time(host, basedir, kernel, section, last_test):
                 break
     if not ok_file:
         return 0
-    f = open(ok_file, 'r')
+    f = open(ok_file, "r")
     for line in f:
         if not "runtime" in line:
             continue
@@ -129,21 +143,28 @@ def get_last_run_time(host, basedir, kernel, section, last_test):
         return float(time_string_elems[0])
     return 0
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def get_section(host, config):
-    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('\"')
+    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('"')
     return host.split(hostprefix + "-")[1].replace("-", "_")
 
+
 def get_hosts(hostfile, hostsection):
     hosts = []
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     config.read(hostfile)
     if hostsection not in config:
         return hosts
diff --git a/scripts/workflows/lib/crash.py b/scripts/workflows/lib/crash.py
index d9d121d10a24..5397d784b8a2 100755
--- a/scripts/workflows/lib/crash.py
+++ b/scripts/workflows/lib/crash.py
@@ -262,7 +262,7 @@ class KernelCrashWatchdog:
         save_warnings=False,
         context_prefix=0,
         context_postfix=35,
-        ssh_timeout = 180,
+        ssh_timeout=180,
     ):
         self.host_name = host_name
         self.output_dir = os.path.join(output_dir, host_name)
@@ -411,7 +411,7 @@ class KernelCrashWatchdog:
 
                     if not key_log_line:
                         logger.warning(f"Error getting key log line for {file_path}")
-                        continue;
+                        continue
                     # Use the first relevant line for any context
                     log_hash = hashlib.md5(key_log_line.encode()).hexdigest()
                     self.known_crashes.append(log_hash)
@@ -551,9 +551,7 @@ class KernelCrashWatchdog:
                 seconds = float(match.group(1))
                 wall_time = boot_time + timedelta(seconds=seconds)
                 timestamp = wall_time.strftime("%b %d %H:%M:%S")
-                converted_lines.append(
-                    f"{timestamp} {self.host_name} {match.group(2)}"
-                )
+                converted_lines.append(f"{timestamp} {self.host_name} {match.group(2)}")
             else:
                 converted_lines.append(line)
 
@@ -780,7 +778,9 @@ class KernelCrashWatchdog:
             )
             logger.info(f"{self.host_name} is now reachable.")
         except subprocess.TimeoutExpired:
-            logger.error(f"Timeout: SSH connection to {self.host_name} did not succeed within {self.ssh_timeout} seconds. This kernel is probably seriously broken.")
+            logger.error(
+                f"Timeout: SSH connection to {self.host_name} did not succeed within {self.ssh_timeout} seconds. This kernel is probably seriously broken."
+            )
             sys.exit(1)
         except subprocess.CalledProcessError as e:
             logger.warning(f"Failed to wait for SSH on {self.host_name}: {e}")
diff --git a/scripts/workflows/lib/fstests.py b/scripts/workflows/lib/fstests.py
index 328277a934d5..ee3fbe45d724 100644
--- a/scripts/workflows/lib/fstests.py
+++ b/scripts/workflows/lib/fstests.py
@@ -7,71 +7,75 @@ import configparser
 import argparse
 from itertools import chain
 
+
 class FstestsError(Exception):
     pass
 
+
 def fstests_check_pid(host):
     pid = kssh.first_process_name_pid(host, "check")
     if pid <= 0:
         return pid
-    dir = "/proc/" + str(pid)  + "/cwd/tests"
+    dir = "/proc/" + str(pid) + "/cwd/tests"
     if kssh.dir_exists(host, dir):
         return pid
     return 0
 
+
 # Later on we can automate this list with a git grep on the fstests
 # tests/ directory, and we inject this here.
 def fstests_test_uses_soak_duration(testname):
-    USES_SOAK_DURATION = [ "generic/019" ]
-    USES_SOAK_DURATION += [ "generic/388" ]
-    USES_SOAK_DURATION += [ "generic/475" ]
-    USES_SOAK_DURATION += [ "generic/476" ]
-    USES_SOAK_DURATION += [ "generic/521" ]
-    USES_SOAK_DURATION += [ "generic/522" ]
-    USES_SOAK_DURATION += [ "generic/616" ]
-    USES_SOAK_DURATION += [ "generic/617" ]
-    USES_SOAK_DURATION += [ "generic/642" ]
-    USES_SOAK_DURATION += [ "generic/650" ]
-    USES_SOAK_DURATION += [ "generic/648" ]
-    USES_SOAK_DURATION += [ "xfs/285" ]
-    USES_SOAK_DURATION += [ "xfs/517" ]
-    USES_SOAK_DURATION += [ "xfs/560" ]
-    USES_SOAK_DURATION += [ "xfs/561" ]
-    USES_SOAK_DURATION += [ "xfs/562" ]
-    USES_SOAK_DURATION += [ "xfs/565" ]
-    USES_SOAK_DURATION += [ "xfs/570" ]
-    USES_SOAK_DURATION += [ "xfs/571" ]
-    USES_SOAK_DURATION += [ "xfs/572" ]
-    USES_SOAK_DURATION += [ "xfs/573" ]
-    USES_SOAK_DURATION += [ "xfs/574" ]
-    USES_SOAK_DURATION += [ "xfs/575" ]
-    USES_SOAK_DURATION += [ "xfs/576" ]
-    USES_SOAK_DURATION += [ "xfs/577" ]
-    USES_SOAK_DURATION += [ "xfs/578" ]
-    USES_SOAK_DURATION += [ "xfs/579" ]
-    USES_SOAK_DURATION += [ "xfs/580" ]
-    USES_SOAK_DURATION += [ "xfs/581" ]
-    USES_SOAK_DURATION += [ "xfs/582" ]
-    USES_SOAK_DURATION += [ "xfs/583" ]
-    USES_SOAK_DURATION += [ "xfs/584" ]
-    USES_SOAK_DURATION += [ "xfs/585" ]
-    USES_SOAK_DURATION += [ "xfs/586" ]
-    USES_SOAK_DURATION += [ "xfs/587" ]
-    USES_SOAK_DURATION += [ "xfs/588" ]
-    USES_SOAK_DURATION += [ "xfs/589" ]
-    USES_SOAK_DURATION += [ "xfs/590" ]
-    USES_SOAK_DURATION += [ "xfs/591" ]
-    USES_SOAK_DURATION += [ "xfs/592" ]
-    USES_SOAK_DURATION += [ "xfs/593" ]
-    USES_SOAK_DURATION += [ "xfs/594" ]
-    USES_SOAK_DURATION += [ "xfs/595" ]
-    USES_SOAK_DURATION += [ "xfs/727" ]
-    USES_SOAK_DURATION += [ "xfs/729" ]
-    USES_SOAK_DURATION += [ "xfs/800" ]
+    USES_SOAK_DURATION = ["generic/019"]
+    USES_SOAK_DURATION += ["generic/388"]
+    USES_SOAK_DURATION += ["generic/475"]
+    USES_SOAK_DURATION += ["generic/476"]
+    USES_SOAK_DURATION += ["generic/521"]
+    USES_SOAK_DURATION += ["generic/522"]
+    USES_SOAK_DURATION += ["generic/616"]
+    USES_SOAK_DURATION += ["generic/617"]
+    USES_SOAK_DURATION += ["generic/642"]
+    USES_SOAK_DURATION += ["generic/650"]
+    USES_SOAK_DURATION += ["generic/648"]
+    USES_SOAK_DURATION += ["xfs/285"]
+    USES_SOAK_DURATION += ["xfs/517"]
+    USES_SOAK_DURATION += ["xfs/560"]
+    USES_SOAK_DURATION += ["xfs/561"]
+    USES_SOAK_DURATION += ["xfs/562"]
+    USES_SOAK_DURATION += ["xfs/565"]
+    USES_SOAK_DURATION += ["xfs/570"]
+    USES_SOAK_DURATION += ["xfs/571"]
+    USES_SOAK_DURATION += ["xfs/572"]
+    USES_SOAK_DURATION += ["xfs/573"]
+    USES_SOAK_DURATION += ["xfs/574"]
+    USES_SOAK_DURATION += ["xfs/575"]
+    USES_SOAK_DURATION += ["xfs/576"]
+    USES_SOAK_DURATION += ["xfs/577"]
+    USES_SOAK_DURATION += ["xfs/578"]
+    USES_SOAK_DURATION += ["xfs/579"]
+    USES_SOAK_DURATION += ["xfs/580"]
+    USES_SOAK_DURATION += ["xfs/581"]
+    USES_SOAK_DURATION += ["xfs/582"]
+    USES_SOAK_DURATION += ["xfs/583"]
+    USES_SOAK_DURATION += ["xfs/584"]
+    USES_SOAK_DURATION += ["xfs/585"]
+    USES_SOAK_DURATION += ["xfs/586"]
+    USES_SOAK_DURATION += ["xfs/587"]
+    USES_SOAK_DURATION += ["xfs/588"]
+    USES_SOAK_DURATION += ["xfs/589"]
+    USES_SOAK_DURATION += ["xfs/590"]
+    USES_SOAK_DURATION += ["xfs/591"]
+    USES_SOAK_DURATION += ["xfs/592"]
+    USES_SOAK_DURATION += ["xfs/593"]
+    USES_SOAK_DURATION += ["xfs/594"]
+    USES_SOAK_DURATION += ["xfs/595"]
+    USES_SOAK_DURATION += ["xfs/727"]
+    USES_SOAK_DURATION += ["xfs/729"]
+    USES_SOAK_DURATION += ["xfs/800"]
     if testname in USES_SOAK_DURATION:
         return True
     return False
 
+
 def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config):
     stall_suspect = False
     force_ssh = False
@@ -111,7 +115,7 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
     else:
         current_time_str = systemd_remote.get_current_time(host).rstrip()
 
-    fstests_date_str_format = '%Y-%m-%d %H:%M:%S'
+    fstests_date_str_format = "%Y-%m-%d %H:%M:%S"
     d1 = datetime.strptime(last_test_time, fstests_date_str_format)
     d2 = datetime.strptime(current_time_str, fstests_date_str_format)
 
@@ -120,31 +124,37 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
 
     soak_duration_seconds = 0
     if "CONFIG_FSTESTS_SOAK_DURATION" in config:
-        soak_duration_seconds = config["CONFIG_FSTESTS_SOAK_DURATION"].strip('\"')
+        soak_duration_seconds = config["CONFIG_FSTESTS_SOAK_DURATION"].strip('"')
         soak_duration_seconds = int(soak_duration_seconds)
 
     if "CONFIG_FSTESTS_WATCHDOG" not in config:
         enable_watchdog = False
     else:
-        enable_watchdog = config["CONFIG_FSTESTS_WATCHDOG"].strip('\"')
+        enable_watchdog = config["CONFIG_FSTESTS_WATCHDOG"].strip('"')
 
     if enable_watchdog:
-        max_new_test_time = config["CONFIG_FSTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip('\"')
+        max_new_test_time = config["CONFIG_FSTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip(
+            '"'
+        )
         max_new_test_time = int(max_new_test_time)
         if not max_new_test_time:
             max_new_test_time = 60
 
-        hung_multiplier_long_tests = config["CONFIG_FSTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"].strip('\"')
+        hung_multiplier_long_tests = config[
+            "CONFIG_FSTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"
+        ].strip('"')
         hung_multiplier_long_tests = int(hung_multiplier_long_tests)
         if not hung_multiplier_long_tests:
             hung_multiplier_long_tests = 10
 
-        hung_fast_test_max_time = config["CONFIG_FSTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"].strip('\"')
+        hung_fast_test_max_time = config[
+            "CONFIG_FSTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"
+        ].strip('"')
         hung_fast_test_max_time = int(hung_fast_test_max_time)
         if not hung_fast_test_max_time:
             hung_fast_test_max_time = 5
 
-        checktime =  get_checktime(host, basedir, kernel, section, last_test)
+        checktime = get_checktime(host, basedir, kernel, section, last_test)
 
         # If no known prior run time test is known we use a max. This only
         # applies to the first run.
@@ -160,23 +170,37 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
         # If a test typically takes between 1 second to 30 seconds we can likely
         # safely assume the system has crashed after hung_fast_test_max_time
         # minutes
-        elif checktime >  0:
+        elif checktime > 0:
             suspect_crash_time_seconds = 60 * hung_fast_test_max_time
 
         if fstests_test_uses_soak_duration(last_test):
             suspect_crash_time_seconds += soak_duration_seconds
 
-        if delta_seconds >= suspect_crash_time_seconds and 'fstestsstart/000' not in last_test and 'fstestsend/000' not in last_test:
+        if (
+            delta_seconds >= suspect_crash_time_seconds
+            and "fstestsstart/000" not in last_test
+            and "fstestsend/000" not in last_test
+        ):
             stall_suspect = True
 
     return (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect)
 
+
 def get_checktime(host, basedir, kernel, section, last_test):
-    checktime_dir = basedir + '/workflows/fstests/results/' + host + '/' + kernel + '/' + section + '/'
-    checktime_file = checktime_dir + 'check.time'
+    checktime_dir = (
+        basedir
+        + "/workflows/fstests/results/"
+        + host
+        + "/"
+        + kernel
+        + "/"
+        + section
+        + "/"
+    )
+    checktime_file = checktime_dir + "check.time"
     if not os.path.isfile(checktime_file):
         return 0
-    cp = open(checktime_file, 'r')
+    cp = open(checktime_file, "r")
     for line in cp:
         elems = line.rstrip().split(" ")
         this_test = elems[0].rstrip().replace(" ", "")
@@ -184,21 +208,28 @@ def get_checktime(host, basedir, kernel, section, last_test):
             return int(elems[1])
     return 0
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def get_section(host, config):
-    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('\"')
+    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('"')
     return host.split(hostprefix + "-")[1].replace("-", "_")
 
+
 def get_hosts(hostfile, hostsection):
     hosts = []
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     config.read(hostfile)
     if hostsection not in config:
         return hosts
diff --git a/scripts/workflows/lib/kssh.py b/scripts/workflows/lib/kssh.py
index 11472767b07a..a9b4b29d81f8 100644
--- a/scripts/workflows/lib/kssh.py
+++ b/scripts/workflows/lib/kssh.py
@@ -2,28 +2,36 @@
 
 import subprocess, os
 
+
 class KsshError(Exception):
     pass
+
+
 class ExecutionError(KsshError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(KsshError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def _check(process):
     if process.returncode != 0:
         raise ExecutionError(process.returncode)
 
+
 def dir_exists(host, dirname):
-    cmd = ['ssh', host,
-           'sudo',
-           'ls', '-ld',
-           dirname ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "sudo", "ls", "-ld", dirname]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -37,17 +45,35 @@ def dir_exists(host, dirname):
         else:
             return False
 
+
 def first_process_name_pid(host, process_name):
-    cmd = ['ssh', host,
-           'sudo',
-           'ps', '-ef',
-           '|', 'grep', '-v', 'grep',
-           '|', 'grep', process_name,
-           '|', 'awk', '\'{print $2}\'',
-           '|', 'tail', '-1' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "sudo",
+        "ps",
+        "-ef",
+        "|",
+        "grep",
+        "-v",
+        "grep",
+        "|",
+        "grep",
+        process_name,
+        "|",
+        "awk",
+        "'{print $2}'",
+        "|",
+        "tail",
+        "-1",
+    ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -62,14 +88,16 @@ def first_process_name_pid(host, process_name):
             return 0
         return int(stdout)
 
+
 def prog_exists(host, prog):
-    cmd = ['ssh', host,
-           'sudo',
-           'which',
-           prog ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "sudo", "which", prog]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -82,11 +110,16 @@ def prog_exists(host, prog):
             return False
         return True
 
+
 def get_uname(host):
-    cmd = ['ssh', host, 'uname', '-r' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "uname", "-r"]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -99,28 +132,49 @@ def get_uname(host):
             return "Uname-issue"
         return stdout
 
+
 def get_test(host, suite):
-    if suite not in [ 'fstests', 'blktests']:
+    if suite not in ["fstests", "blktests"]:
         return None
     run_string = "run " + suite
-    cmd = ['ssh', host,
-           'sudo',
-           'dmesg',
-           '|', 'grep', '"' + run_string + '"',
-           '|', 'awk', '-F"' + run_string + ' "', '\'{print $2}\'',
-           '|', 'tail', '-1' ]
-    if prog_exists(host, 'journalctl'):
-        cmd = ['ssh', host,
-               'sudo',
-               'journalctl',
-               '-k',
-               '-g'
-               '"' + run_string + '"'
-               '|', 'awk', '-F"' + run_string + ' "', '\'{print $2}\'',
-               '|', 'tail', '-1' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "sudo",
+        "dmesg",
+        "|",
+        "grep",
+        '"' + run_string + '"',
+        "|",
+        "awk",
+        '-F"' + run_string + ' "',
+        "'{print $2}'",
+        "|",
+        "tail",
+        "-1",
+    ]
+    if prog_exists(host, "journalctl"):
+        cmd = [
+            "ssh",
+            host,
+            "sudo",
+            "journalctl",
+            "-k",
+            "-g" '"' + run_string + '"' "|",
+            "awk",
+            '-F"' + run_string + ' "',
+            "'{print $2}'",
+            "|",
+            "tail",
+            "-1",
+        ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -139,19 +193,33 @@ def get_test(host, suite):
 
         return stdout
 
+
 def get_last_fstest(host):
-    return get_test(host, 'fstests')
+    return get_test(host, "fstests")
+
 
 def get_last_blktest(host):
-    return get_test(host, 'blktests')
+    return get_test(host, "blktests")
+
 
 def get_current_time(host):
-    cmd = ['ssh', host,
-           'date', '--rfc-3339=\'seconds\'',
-           '|', 'awk', '-F"+"', '\'{print $1}\'' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "date",
+        "--rfc-3339='seconds'",
+        "|",
+        "awk",
+        '-F"+"',
+        "'{print $1}'",
+    ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
diff --git a/scripts/workflows/lib/systemd_remote.py b/scripts/workflows/lib/systemd_remote.py
index c95ca7e57bd5..1a65de9ffb6b 100644
--- a/scripts/workflows/lib/systemd_remote.py
+++ b/scripts/workflows/lib/systemd_remote.py
@@ -3,19 +3,27 @@
 import subprocess, os, sys
 from datetime import datetime
 
+
 class SystemdError(Exception):
     pass
+
+
 class ExecutionError(SystemdError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(SystemdError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def get_host_ip(host):
     try:
-        result = subprocess.run(["ssh", "-G", host], capture_output=True, text=True, check=True)
+        result = subprocess.run(
+            ["ssh", "-G", host], capture_output=True, text=True, check=True
+        )
         for line in result.stdout.splitlines():
             if line.startswith("hostname "):
                 return line.split()[1]
@@ -23,15 +31,17 @@ def get_host_ip(host):
         logger.warning(f"Failed to resolve IP for {host}: {e}")
     return None
 
+
 def get_current_time(host):
-    format = '%Y-%m-%d %H:%M:%S'
+    format = "%Y-%m-%d %H:%M:%S"
     today = datetime.today()
     today_str = today.strftime(format)
     return today_str
 
+
 def get_extra_journals(remote_path, host):
     ip = get_host_ip(host)
-    extra_journals_path = "remote-" + ip + '@'
+    extra_journals_path = "remote-" + ip + "@"
     extra_journals = []
     for file in os.listdir(remote_path):
         if extra_journals_path in file:
@@ -39,37 +49,33 @@ def get_extra_journals(remote_path, host):
             extra_journals.append(remote_path + file)
     return extra_journals
 
+
 def get_uname(remote_path, host, configured_kernel):
     ip = get_host_ip(host)
     extra_journals = get_extra_journals(remote_path, host)
-    fpath = remote_path + "remote-" + ip + '.journal'
+    fpath = remote_path + "remote-" + ip + ".journal"
     grep = "Linux version"
-    grep_str = "\"Linux version\""
-    cmd = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           grep,
-           '--file',
-           fpath ]
+    grep_str = '"Linux version"'
+    cmd = ["journalctl", "--no-pager", "-n 1", "-k", "-g", grep, "--file", fpath]
     cmd = cmd + extra_journals
     cmd_verbose = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           grep_str,
-           '--file',
-           fpath ]
+        "journalctl",
+        "--no-pager",
+        "-n 1",
+        "-k",
+        "-g",
+        grep_str,
+        "--file",
+        fpath,
+    ]
     cmd_verbose = cmd_verbose + extra_journals
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -88,42 +94,41 @@ def get_uname(remote_path, host, configured_kernel):
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         if len(last_line.split(grep)) <= 1:
-            sys.stderr.write("\nThe string %s could not be used to split the line." % grep_str)
+            sys.stderr.write(
+                "\nThe string %s could not be used to split the line." % grep_str
+            )
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         kernel_line = last_line.split(grep)[1].strip()
         if len(last_line.split()) <= 1:
-            sys.stderr.write("\nThe string %s was used but could not find kernel version." % grep_str)
+            sys.stderr.write(
+                "\nThe string %s was used but could not find kernel version." % grep_str
+            )
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         kernel = kernel_line.split()[0].strip()
 
         return kernel
 
+
 # Returns something like "xfs/040 at 2023-12-17 23:52:14"
 def get_test(remote_path, host, suite):
     ip = get_host_ip(host)
-    if suite not in [ 'fstests', 'blktests']:
+    if suite not in ["fstests", "blktests"]:
         return None
     # Example: /var/log/journal/remote/remote-line-xfs-reflink.journal
-    fpath = remote_path + "remote-" + ip + '.journal'
+    fpath = remote_path + "remote-" + ip + ".journal"
     extra_journals = get_extra_journals(remote_path, host)
     run_string = "run " + suite
-    cmd = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           run_string,
-           '--file',
-           fpath ]
+    cmd = ["journalctl", "--no-pager", "-n 1", "-k", "-g", run_string, "--file", fpath]
     cmd = cmd + extra_journals
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -142,8 +147,10 @@ def get_test(remote_path, host, suite):
 
         return test_line
 
+
 def get_last_fstest(remote_path, host):
-    return get_test(remote_path, host, 'fstests')
+    return get_test(remote_path, host, "fstests")
+
 
 def get_last_blktest(remote_path, host):
-    return get_test(remote_path, host, 'blktests')
+    return get_test(remote_path, host, "blktests")
diff --git a/scripts/workflows/pynfs/check_pynfs_results.py b/scripts/workflows/pynfs/check_pynfs_results.py
index 724d2ace3950..753d177f734a 100755
--- a/scripts/workflows/pynfs/check_pynfs_results.py
+++ b/scripts/workflows/pynfs/check_pynfs_results.py
@@ -12,20 +12,21 @@ import json
 import sys
 import pprint
 
+
 def main():
     base = json.load(open(sys.argv[1]))
     result = json.load(open(sys.argv[2]))
 
     failures = {}
 
-    for case in result['testcase']:
-        if 'failure' in case:
-            failures[case['code']] = case
+    for case in result["testcase"]:
+        if "failure" in case:
+            failures[case["code"]] = case
 
-    for case in base['testcase']:
-        if 'failure' in case:
-            if case['code'] in failures:
-                del failures[case['code']]
+    for case in base["testcase"]:
+        if "failure" in case:
+            if case["code"] in failures:
+                del failures[case["code"]]
 
     if len(failures) != 0:
         pprint.pprint(failures)
@@ -33,6 +34,6 @@ def main():
     else:
         sys.exit(0)
 
+
 if __name__ == "__main__":
     main()
-
-- 
2.47.2
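[A note on one hunk above, not part of the original patch: in `get_test()` the reflowed journalctl invocation contains the element `"-g" '"' + run_string + '"' "|",` — a pre-existing bug that black merely preserves and, usefully, makes easier to spot. Missing commas trigger Python's implicit string-literal concatenation, fusing what were presumably meant to be several argv elements into one. A minimal sketch of what that element actually evaluates to (the `run_string` value is illustrative; the real code builds `"run " + suite`):]

```python
# Reproduce the implicit string concatenation from the get_test() hunk.
# Adjacent string literals are concatenated at compile time, so the
# missing commas fuse "-g", the quoted pattern, and "|" into ONE element.
run_string = "run fstests"  # illustrative; real code uses "run " + suite

fused = "-g" '"' + run_string + '"' "|"
print(fused)  # a single malformed argument: -g"run fstests"|

# What was presumably intended: separate list elements, one per argument.
intended = ["-g", '"' + run_string + '"', "|"]
print(intended)
```

[Because the whole pipeline is passed to ssh as a remote shell command line, the fused string may still happen to work, which is likely why the bug survived; a formatter pass like this one is exactly where such concatenations become visible for a follow-up fix.]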


Thread overview: 13+ messages
2025-08-01 19:46 [PATCH v3 00/11] kdevops: add support for A/B testing Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 01/11] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 02/11] Makefile: suppress Ansible warnings during configuration generation Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 03/11] playbooks: few space cleanups Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 04/11] style: add extensive code formatting checks to make style Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 05/11] Makefile: move styling to scripts/style.Makefile Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 06/11] CLAUDE.md: add instrucitons to verify commit Luis Chamberlain
2025-08-01 19:46 ` Luis Chamberlain [this message]
2025-08-01 19:46 ` [PATCH v3 08/11] scripts: enhance hop count detection to support DEB822 format Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 09/11] devconfig: add automatic APT mirror fallback with DEB822 modernization Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 10/11] devconfig: enhance hop1 detection to support traditional sources.list Luis Chamberlain
2025-08-01 19:46 ` [PATCH v3 11/11] bootlinux: add support for A/B kernel testing Luis Chamberlain
2025-08-02 17:15 ` [PATCH v3 00/11] kdevops: add support for A/B testing Luis Chamberlain
