public inbox for kdevops@lists.linux.dev
 help / color / mirror / Atom feed
* [PATCH v2 0/9] kdevops: add support for A/B testing
@ 2025-07-30  6:01 Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

This v2 is rebased on top of the latest changes and makes the support
more careful. It also now adds a check to ensure we don't regress
this.

A train of styling fixes goes with this patchset as we adopt more AI
code, as it's the only way to keep it in check on styling. It turns
out to be an art of its own.

Luis Chamberlain (9):
  roles/guestfs: add missing bootlinux_9p: False
  Makefile: suppress Ansible warnings during configuration generation
  playbooks: few space cleanups
  style: add extensive code formatting checks to make style
  Makefile: move styling to scripts/style.Makefile
  CLAUDE.md: add instructions to verify commit
  all: run black
  devconfig: add automatic APT mirror fallback for Debian testing
  bootlinux: add support for A/B kernel testing

 .github/workflows/linux-ab-testing.yml        | 217 +++++++++
 .github/workflows/linux-ab.yml                |  47 ++
 CLAUDE.md                                     |  10 +
 Makefile                                      |  18 +-
 PROMPTS.md                                    |  52 +++
 defconfigs/linux-ab-testing                   |  14 +
 defconfigs/linux-ab-testing-9p                |  15 +
 defconfigs/linux-ab-testing-builder           |  15 +
 defconfigs/linux-ab-testing-target            |  15 +
 docs/kdevops-make-linux.md                    | 158 +++++++
 playbooks/mmtests.yml                         |   2 +-
 .../blktests/augment_expunge_list.py          |  95 ++--
 .../workflows/blktests/gen-expunge-args.py    |  46 +-
 .../workflows/blktests/gen-results-dir.py     |  39 +-
 .../blktests/get_new_expunge_files.py         |  16 +-
 .../dynamic-kconfig/gen-dynamic-pci.py        |  89 ++--
 .../workflows/fstests/augment_expunge_list.py | 163 +++++--
 .../workflows/fstests/bad_files_summary.py    |  46 +-
 .../fstests/fstests-checktime-distribution.py |  46 +-
 .../workflows/fstests/gen_results_summary.py  | 132 +++---
 .../fstests/get_new_expunge_files.py          |  24 +-
 playbooks/python/workflows/fstests/lib/git.py |  21 +-
 .../workflows/fstests/xunit_merge_all.py      |  33 +-
 .../sysbench/sysbench-tps-compare.py          |  84 +++-
 .../workflows/sysbench/sysbench-tps-plot.py   |  36 +-
 .../sysbench/sysbench-tps-variance.py         | 435 +++++++++++++-----
 playbooks/roles/bootlinux/defaults/main.yml   |  14 +
 playbooks/roles/bootlinux/tasks/build/9p.yml  |  20 +-
 .../install-minimal-deps/debian/main.yml      |   2 +-
 .../tasks/install-minimal-deps/main.yml       |   2 +-
 .../install-minimal-deps/redhat/main.yml      |   2 +-
 .../tasks/install-minimal-deps/suse/main.yml  |   2 +-
 playbooks/roles/bootlinux/tasks/main.yml      | 112 +++++
 .../devconfig/tasks/check-apt-mirrors.yml     |  63 +++
 playbooks/roles/devconfig/tasks/main.yml      |   8 +
 .../debian-testing-fallback-sources.list      |  10 +
 .../gen_pcie_passthrough_guestfs_xml.py       |  49 +-
 playbooks/roles/guestfs/defaults/main.yml     |   1 +
 .../linux-mirror/python/gen-mirror-files.py   | 131 +++---
 .../linux-mirror/python/start-mirroring.py    | 116 +++--
 .../roles/mmtests/tasks/install-deps/main.yml |   2 +-
 scripts/check_commit_format.py                |  28 +-
 .../generation/check_for_atomic_calls.py      |  71 +--
 .../generation/check_for_sleepy_calls.py      | 202 +++++---
 scripts/detect_indentation_issues.py          | 163 +++++++
 scripts/detect_whitespace_issues.py           |  38 +-
 scripts/ensure_newlines.py                    |  75 +++
 scripts/fix_indentation_issues.py             | 152 ++++++
 scripts/fix_whitespace_issues.py              |  44 +-
 scripts/generate_refs.py                      |   6 +-
 scripts/honey-badger.py                       | 103 +++--
 scripts/infer_last_stable_kernel.sh           |  35 ++
 scripts/linux-ab-testing.Makefile             |  51 ++
 scripts/spdxcheck.py                          | 201 ++++----
 scripts/style.Makefile                        |  12 +
 scripts/test-linux-ab-config.py               | 182 ++++++++
 scripts/test-linux-ab.sh                      | 213 +++++++++
 scripts/update_ssh_config_guestfs.py          |  49 +-
 .../workflows/blktests/blktests_watchdog.py   |  75 ++-
 scripts/workflows/cxl/gen_qemu_cxl.py         | 235 +++++++---
 scripts/workflows/fstests/fstests_watchdog.py |  99 ++--
 scripts/workflows/generic/crash_report.py     |   4 +-
 scripts/workflows/generic/crash_watchdog.py   |  78 +++-
 scripts/workflows/lib/blktests.py             |  47 +-
 scripts/workflows/lib/crash.py                |  12 +-
 scripts/workflows/lib/fstests.py              | 155 ++++---
 scripts/workflows/lib/kssh.py                 | 178 ++++---
 scripts/workflows/lib/systemd_remote.py       | 101 ++--
 .../workflows/pynfs/check_pynfs_results.py    |  17 +-
 workflows/linux/Kconfig                       | 102 +++-
 workflows/linux/Makefile                      |  39 ++
 71 files changed, 4049 insertions(+), 1120 deletions(-)
 create mode 100644 .github/workflows/linux-ab-testing.yml
 create mode 100644 .github/workflows/linux-ab.yml
 create mode 100644 defconfigs/linux-ab-testing
 create mode 100644 defconfigs/linux-ab-testing-9p
 create mode 100644 defconfigs/linux-ab-testing-builder
 create mode 100644 defconfigs/linux-ab-testing-target
 create mode 100644 playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
 create mode 100644 playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
 create mode 100755 scripts/detect_indentation_issues.py
 create mode 100755 scripts/ensure_newlines.py
 create mode 100755 scripts/fix_indentation_issues.py
 create mode 100755 scripts/infer_last_stable_kernel.sh
 create mode 100644 scripts/linux-ab-testing.Makefile
 create mode 100644 scripts/style.Makefile
 create mode 100755 scripts/test-linux-ab-config.py
 create mode 100755 scripts/test-linux-ab.sh

-- 
2.47.2


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30 14:17   ` Chuck Lever
  2025-07-30  6:01 ` [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation Luis Chamberlain
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Commit 722d83a4871c41 ("bootlinux: Move 9p build tasks to a subrole")
added proactive directory creation since we no longer git clone right
away, but forgot to ensure the variable is defined by default. When
building Linux is not enabled, this variable is undefined. Fix this.

Fixes: 722d83a4871c41 ("bootlinux: Move 9p build tasks to a subrole")
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 playbooks/roles/guestfs/defaults/main.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/playbooks/roles/guestfs/defaults/main.yml b/playbooks/roles/guestfs/defaults/main.yml
index eec137bd..76854d06 100644
--- a/playbooks/roles/guestfs/defaults/main.yml
+++ b/playbooks/roles/guestfs/defaults/main.yml
@@ -3,3 +3,4 @@ distro_debian_based: false
 
 libvirt_uri_system: false
 libvirt_enable_largeio: false
+bootlinux_9p: False
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:22   ` Daniel Gomez
  2025-07-30  6:01 ` [PATCH v2 3/9] playbooks: few space cleanups Luis Chamberlain
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

The initial configuration generation playbooks (ansible_cfg.yml,
gen_hosts.yml, and gen_nodes.yml) run without a proper inventory file
by design, as they are responsible for creating the configuration files
that will be used by subsequent playbooks.

This causes Ansible to emit warnings about:
- "No inventory was parsed, only implicit localhost is available"
- "provided hosts list is empty, only localhost is available"

These warnings are harmless but create noise in the build output,
potentially confusing users who might think something is wrong.

Add ANSIBLE_LOCALHOST_WARNING=False and ANSIBLE_INVENTORY_UNPARSED_WARNING=False
environment variables to these three ansible-playbook invocations to
suppress these specific warnings. This makes the build output cleaner
while maintaining the same functionality.

The warnings were appearing because:
1. ansible_cfg.yml creates the ansible.cfg file
2. gen_hosts.yml creates the hosts inventory file
3. gen_nodes.yml creates the kdevops_nodes.yaml file

All three intentionally run with "--connection=local" and minimal
inventory since they're bootstrapping the configuration that other
playbooks will use.
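As an aside (not part of the patch), the same suppression can be reproduced from Python when driving ansible-playbook through subprocess, which is handy for testing that the warnings really go away; the playbook path here is illustrative and the actual invocation is left commented out:

```python
import os
import subprocess

# Build an environment with the two Ansible warnings disabled,
# mirroring what the Makefile targets now prepend to ansible-playbook.
env = dict(
    os.environ,
    ANSIBLE_LOCALHOST_WARNING="False",
    ANSIBLE_INVENTORY_UNPARSED_WARNING="False",
)

# Hypothetical bootstrap invocation, matching the Makefile's flags.
cmd = [
    "ansible-playbook",
    "--connection=local",
    "--inventory", "localhost,",
    "playbooks/ansible_cfg.yml",
]

# subprocess.run(cmd, env=env, check=True)  # uncomment to actually run
print(env["ANSIBLE_LOCALHOST_WARNING"])
```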

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Makefile | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 37c2522b..31f544e9 100644
--- a/Makefile
+++ b/Makefile
@@ -195,7 +195,8 @@ include scripts/gen-nodes.Makefile
 	false)
 
 $(ANSIBLE_CFG_FILE): .config
-	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
+	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
+		ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
 		--inventory localhost, \
 		$(KDEVOPS_PLAYBOOKS_DIR)/ansible_cfg.yml \
 		--extra-vars=@./.extra_vars_auto.yaml
@@ -226,13 +227,15 @@ endif
 
 DEFAULT_DEPS += $(ANSIBLE_INVENTORY_FILE)
 $(ANSIBLE_INVENTORY_FILE): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_HOSTS_TEMPLATE)
-	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
+		ansible-playbook $(ANSIBLE_VERBOSE) \
 		$(KDEVOPS_PLAYBOOKS_DIR)/gen_hosts.yml \
 		--extra-vars=@./extra_vars.yaml
 
 DEFAULT_DEPS += $(KDEVOPS_NODES)
 $(KDEVOPS_NODES): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_NODES_TEMPLATE)
-	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
+	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
+		ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
 		--inventory localhost, \
 		$(KDEVOPS_PLAYBOOKS_DIR)/gen_nodes.yml \
 		--extra-vars=@./extra_vars.yaml
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 3/9] playbooks: few space cleanups
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 4/9] style: add extensive code formatting checks to make style Luis Chamberlain
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Since we're going to be using bots, we sometimes forget to clean up
after them. Fix a few minor spacing issues left behind by prior
commits.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Makefile                                                        | 2 +-
 playbooks/mmtests.yml                                           | 2 +-
 .../roles/bootlinux/tasks/install-minimal-deps/debian/main.yml  | 2 +-
 playbooks/roles/bootlinux/tasks/install-minimal-deps/main.yml   | 2 +-
 .../roles/bootlinux/tasks/install-minimal-deps/redhat/main.yml  | 2 +-
 .../roles/bootlinux/tasks/install-minimal-deps/suse/main.yml    | 2 +-
 playbooks/roles/mmtests/tasks/install-deps/main.yml             | 2 +-
 7 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/Makefile b/Makefile
index 31f544e9..a8b58eb7 100644
--- a/Makefile
+++ b/Makefile
@@ -122,7 +122,7 @@ include scripts/krb5.Makefile
 include scripts/devconfig.Makefile
 include scripts/ssh.Makefile
 
-ANSIBLE_CMD_KOTD_ENABLE := echo KOTD disabled so not running: 
+ANSIBLE_CMD_KOTD_ENABLE := echo KOTD disabled so not running:
 ifeq (y,$(CONFIG_WORKFLOW_KOTD_ENABLE))
 include scripts/kotd.Makefile
 endif # WORKFLOW_KOTD_ENABLE
diff --git a/playbooks/mmtests.yml b/playbooks/mmtests.yml
index d341d246..f66e65db 100644
--- a/playbooks/mmtests.yml
+++ b/playbooks/mmtests.yml
@@ -1,4 +1,4 @@
 ---
 - hosts: all
   roles:
-    - role: mmtests
\ No newline at end of file
+    - role: mmtests
diff --git a/playbooks/roles/bootlinux/tasks/install-minimal-deps/debian/main.yml b/playbooks/roles/bootlinux/tasks/install-minimal-deps/debian/main.yml
index 9fd6834f..21d33331 100644
--- a/playbooks/roles/bootlinux/tasks/install-minimal-deps/debian/main.yml
+++ b/playbooks/roles/bootlinux/tasks/install-minimal-deps/debian/main.yml
@@ -16,4 +16,4 @@
       - make
       - gcc
       - kmod
-    state: present
\ No newline at end of file
+    state: present
diff --git a/playbooks/roles/bootlinux/tasks/install-minimal-deps/main.yml b/playbooks/roles/bootlinux/tasks/install-minimal-deps/main.yml
index d1e9f675..75dbd9a4 100644
--- a/playbooks/roles/bootlinux/tasks/install-minimal-deps/main.yml
+++ b/playbooks/roles/bootlinux/tasks/install-minimal-deps/main.yml
@@ -12,4 +12,4 @@
 - name: Red Hat-specific minimal setup
   ansible.builtin.import_tasks: redhat/main.yml
   when:
-    - ansible_os_family == "RedHat"
\ No newline at end of file
+    - ansible_os_family == "RedHat"
diff --git a/playbooks/roles/bootlinux/tasks/install-minimal-deps/redhat/main.yml b/playbooks/roles/bootlinux/tasks/install-minimal-deps/redhat/main.yml
index c5cd6fbf..e077f5e8 100644
--- a/playbooks/roles/bootlinux/tasks/install-minimal-deps/redhat/main.yml
+++ b/playbooks/roles/bootlinux/tasks/install-minimal-deps/redhat/main.yml
@@ -24,4 +24,4 @@
       - kmod
     state: present
   when:
-    - ansible_facts['distribution_major_version']|int >= 8
\ No newline at end of file
+    - ansible_facts['distribution_major_version']|int >= 8
diff --git a/playbooks/roles/bootlinux/tasks/install-minimal-deps/suse/main.yml b/playbooks/roles/bootlinux/tasks/install-minimal-deps/suse/main.yml
index a17768f9..ce73ddd1 100644
--- a/playbooks/roles/bootlinux/tasks/install-minimal-deps/suse/main.yml
+++ b/playbooks/roles/bootlinux/tasks/install-minimal-deps/suse/main.yml
@@ -10,4 +10,4 @@
       - make
       - gcc
       - kmod-compat
-    state: present
\ No newline at end of file
+    state: present
diff --git a/playbooks/roles/mmtests/tasks/install-deps/main.yml b/playbooks/roles/mmtests/tasks/install-deps/main.yml
index d3972fae..3ecd6982 100644
--- a/playbooks/roles/mmtests/tasks/install-deps/main.yml
+++ b/playbooks/roles/mmtests/tasks/install-deps/main.yml
@@ -13,4 +13,4 @@
 
 - name: RedHat distribution specific setup
   import_tasks: tasks/install-deps/redhat/main.yml
-  when: ansible_facts['os_family']|lower == 'redhat'
\ No newline at end of file
+  when: ansible_facts['os_family']|lower == 'redhat'
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 4/9] style: add extensive code formatting checks to make style
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (2 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 3/9] playbooks: few space cleanups Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 5/9] Makefile: move styling to scripts/style.Makefile Luis Chamberlain
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Extend the 'make style' target with comprehensive code formatting and
style checks:

1. Add black formatter for Python files - runs 'black .' to check and
   format all Python files in the repository

2. Add indentation detection for mixed tabs/spaces:
   - detect_indentation_issues.py: Detects incorrect indentation based
     on file type (YAML requires spaces, Makefiles need tabs for
     recipes, Python prefers spaces)
   - fix_indentation_issues.py: Automatically fixes indentation issues

3. Keep existing checks:
   - Whitespace issue detection (trailing spaces, missing newlines)
   - Commit message format validation
   - Ensure all text files end with newlines

The style checks are ordered to run black first as it provides the
most comprehensive Python formatting validation. Indentation and
whitespace checks default to checking only modified files to avoid
being overwhelmed by pre-existing issues.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Makefile                             |  11 +-
 scripts/detect_indentation_issues.py | 163 +++++++++++++++++++++++++++
 scripts/ensure_newlines.py           |  75 ++++++++++++
 scripts/fix_indentation_issues.py    | 152 +++++++++++++++++++++++++
 4 files changed, 399 insertions(+), 2 deletions(-)
 create mode 100755 scripts/detect_indentation_issues.py
 create mode 100755 scripts/ensure_newlines.py
 create mode 100755 scripts/fix_indentation_issues.py

diff --git a/Makefile b/Makefile
index a8b58eb7..2c72042d 100644
--- a/Makefile
+++ b/Makefile
@@ -247,10 +247,17 @@ include scripts/ci.Makefile
 include scripts/archive.Makefile
 include scripts/defconfig.Makefile
 
+PHONY += fix-whitespace-last-commit
+fix-whitespace-last-commit:
+	$(Q)git diff --name-only --diff-filter=M HEAD~1..HEAD | xargs -r python3 scripts/fix_whitespace_issues.py
+
 PHONY += style
 style:
-	$(Q)python3 scripts/detect_whitespace_issues.py
-	$(Q)python3 scripts/check_commit_format.py
+	$(Q)if which black > /dev/null ; then black . || true; fi
+	$(Q)python3 scripts/detect_whitespace_issues.py || true
+	$(Q)python3 scripts/detect_indentation_issues.py || true
+	$(Q)python3 scripts/check_commit_format.py || true
+	$(Q)python3 scripts/ensure_newlines.py || true
 
 PHONY += clean
 clean:
diff --git a/scripts/detect_indentation_issues.py b/scripts/detect_indentation_issues.py
new file mode 100755
index 00000000..98783b4c
--- /dev/null
+++ b/scripts/detect_indentation_issues.py
@@ -0,0 +1,163 @@
+#!/usr/bin/env python3
+"""
+Detect indentation issues in files - mixed tabs/spaces or incorrect indentation
+"""
+
+import os
+import sys
+from pathlib import Path
+
+
+def check_file_indentation(file_path):
+    """Check a single file for indentation issues"""
+    issues = []
+
+    try:
+        with open(file_path, "rb") as f:
+            content = f.read()
+
+        # Skip binary files
+        if b"\0" in content:
+            return issues
+
+        lines = content.decode("utf-8", errors="ignore").splitlines()
+
+        # Determine expected indentation style
+        uses_tabs = False
+        uses_spaces = False
+
+        for line in lines[:100]:  # Check first 100 lines to determine style
+            if line.startswith("\t"):
+                uses_tabs = True
+            elif line.startswith(" "):
+                uses_spaces = True
+
+        # Special rules for certain file types
+        file_ext = Path(file_path).suffix.lower()
+        is_yaml = file_ext in [".yml", ".yaml"]
+        is_makefile = "Makefile" in Path(file_path).name or file_ext == ".mk"
+        is_python = file_ext == ".py"
+
+        # Check each line for issues
+        for line_num, line in enumerate(lines, 1):
+            if not line.strip():  # Skip empty lines
+                continue
+
+            # Get leading whitespace
+            leading_ws = line[: len(line) - len(line.lstrip())]
+
+            if is_yaml:
+                # YAML should use spaces only
+                if "\t" in leading_ws:
+                    issues.append(
+                        f"Line {line_num}: Tab character in YAML file (should use spaces)"
+                    )
+            elif is_makefile:
+                # Makefiles need tabs for recipe lines
+                # But can use spaces for variable definitions
+                stripped = line.lstrip()
+                if stripped and not stripped.startswith("#"):
+                    # Check if this looks like a recipe line (follows a target)
+                    if line_num > 1 and lines[line_num - 2].strip().endswith(":"):
+                        if leading_ws and not leading_ws.startswith("\t"):
+                            issues.append(
+                                f"Line {line_num}: Recipe line should start with tab"
+                            )
+            elif is_python:
+                # Python should use spaces only
+                if "\t" in leading_ws:
+                    issues.append(
+                        f"Line {line_num}: Tab character in Python file (PEP 8 recommends spaces)"
+                    )
+            else:
+                # For other files, check for mixed indentation
+                if leading_ws:
+                    has_tabs = "\t" in leading_ws
+                    has_spaces = " " in leading_ws
+
+                    if has_tabs and has_spaces:
+                        issues.append(
+                            f"Line {line_num}: Mixed tabs and spaces in indentation"
+                        )
+                    elif uses_tabs and uses_spaces:
+                        # File uses both styles
+                        if has_tabs and not uses_tabs:
+                            issues.append(
+                                f"Line {line_num}: Tab indentation (file mostly uses spaces)"
+                            )
+                        elif has_spaces and not uses_spaces:
+                            issues.append(
+                                f"Line {line_num}: Space indentation (file mostly uses tabs)"
+                            )
+
+    except Exception as e:
+        issues.append(f"Error reading file: {e}")
+
+    return issues
+
+
+def main():
+    """Main function to scan for indentation issues"""
+    if len(sys.argv) > 1:
+        paths = sys.argv[1:]
+    else:
+        # Default to git tracked files with modifications
+        import subprocess
+
+        try:
+            result = subprocess.run(
+                ["git", "diff", "--name-only"],
+                capture_output=True,
+                text=True,
+                check=True,
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
+            if not paths:
+                print("No modified files found in git")
+                return 0
+        except subprocess.CalledProcessError:
+            print("Error: Not in a git repository or git command failed")
+            return 1
+        except FileNotFoundError:
+            print("Error: git command not found")
+            return 1
+
+    total_issues = 0
+    files_with_issues = 0
+
+    for path_str in paths:
+        if not path_str:
+            continue
+
+        path = Path(path_str)
+        if not path.exists() or not path.is_file():
+            continue
+
+        # Skip binary files and certain extensions
+        if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif", ".ico"]:
+            continue
+
+        issues = check_file_indentation(path)
+        if issues:
+            files_with_issues += 1
+            total_issues += len(issues)
+            print(f"\n{path}:")
+            for issue in issues:
+                print(f"  ⚠️  {issue}")
+
+    if total_issues > 0:
+        print(
+            f"\nSummary: {total_issues} indentation issues found in {files_with_issues} files"
+        )
+        print("\nTo fix these issues:")
+        print("- Use spaces only in YAML and Python files")
+        print("- Use tabs for Makefile recipe lines")
+        print("- Be consistent with indentation style within each file")
+        return 1
+    else:
+        print("✅ No indentation issues found!")
+        return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/scripts/ensure_newlines.py b/scripts/ensure_newlines.py
new file mode 100755
index 00000000..969cf32a
--- /dev/null
+++ b/scripts/ensure_newlines.py
@@ -0,0 +1,75 @@
+#!/usr/bin/env python3
+"""
+Ensure all text files end with a newline
+"""
+
+import os
+import sys
+from pathlib import Path
+
+
+def needs_newline(file_path):
+    """Check if file needs a newline at the end"""
+    try:
+        with open(file_path, "rb") as f:
+            content = f.read()
+            if not content:
+                return False
+            # Skip binary files
+            if b"\0" in content:
+                return False
+            return not content.endswith(b"\n")
+    except OSError:
+        return False
+
+
+def add_newline(file_path):
+    """Add newline to end of file"""
+    try:
+        with open(file_path, "a") as f:
+            f.write("\n")
+        return True
+    except OSError:
+        return False
+
+
+def main():
+    """Main function"""
+    extensions = [".py", ".yml", ".yaml", ".sh", ".md", ".txt", ".j2", ".cfg"]
+    filenames = ["Makefile", "Kconfig", "hosts", ".gitignore", "LICENSE"]
+
+    fixed_count = 0
+
+    for root, dirs, files in os.walk("."):
+        # Skip hidden directories and common non-source directories
+        dirs[:] = [
+            d
+            for d in dirs
+            if not d.startswith(".") and d not in ["__pycache__", "node_modules"]
+        ]
+
+        for file in files:
+            file_path = os.path.join(root, file)
+
+            # Check if file should be processed
+            should_process = False
+            if any(file.endswith(ext) for ext in extensions):
+                should_process = True
+            elif (
+                file in filenames
+                or file.startswith("Makefile")
+                or file.endswith("Kconfig")
+            ):
+                should_process = True
+
+            if should_process and needs_newline(file_path):
+                if add_newline(file_path):
+                    print(f"Added newline to: {file_path}")
+                    fixed_count += 1
+
+    print(f"\nFixed {fixed_count} files")
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/scripts/fix_indentation_issues.py b/scripts/fix_indentation_issues.py
new file mode 100755
index 00000000..42a5ef9d
--- /dev/null
+++ b/scripts/fix_indentation_issues.py
@@ -0,0 +1,152 @@
+#!/usr/bin/env python3
+"""
+Fix indentation issues in files - convert between tabs and spaces as appropriate
+"""
+
+import os
+import sys
+from pathlib import Path
+
+
+def fix_file_indentation(file_path, dry_run=False):
+    """Fix indentation in a single file"""
+    fixed = False
+
+    try:
+        with open(file_path, "rb") as f:
+            content = f.read()
+
+        # Skip binary files
+        if b"\0" in content:
+            return False
+
+        original_content = content
+        lines = content.decode("utf-8", errors="ignore").splitlines(keepends=True)
+
+        # Determine file type
+        file_ext = Path(file_path).suffix.lower()
+        is_yaml = file_ext in [".yml", ".yaml"]
+        is_makefile = "Makefile" in Path(file_path).name or file_ext == ".mk"
+        is_python = file_ext == ".py"
+
+        new_lines = []
+
+        for line_num, line in enumerate(lines, 1):
+            if not line.strip():  # Keep empty lines as-is
+                new_lines.append(line)
+                continue
+
+            # Get leading whitespace
+            leading_ws = line[: len(line) - len(line.lstrip())]
+            rest_of_line = line[len(leading_ws) :]
+
+            if is_yaml or is_python:
+                # Convert any tabs to spaces (4 spaces per tab)
+                if "\t" in leading_ws:
+                    new_leading = leading_ws.replace("\t", "    ")
+                    new_line = new_leading + rest_of_line
+                    new_lines.append(new_line)
+                    if not dry_run:
+                        print(f"  ✅ Line {line_num}: Converted tabs to spaces")
+                    fixed = True
+                else:
+                    new_lines.append(line)
+            elif is_makefile:
+                # Check if this is a recipe line
+                if line_num > 1 and lines[line_num - 2].strip().endswith(":"):
+                    # Recipe line should start with tab
+                    if leading_ws and not leading_ws.startswith("\t"):
+                        # Convert leading spaces to tab
+                        space_count = len(leading_ws)
+                        tab_count = (space_count + 3) // 4  # Round up
+                        new_leading = "\t" * tab_count
+                        new_line = new_leading + rest_of_line
+                        new_lines.append(new_line)
+                        if not dry_run:
+                            print(
+                                f"  ✅ Line {line_num}: Converted spaces to tabs (Makefile recipe)"
+                            )
+                        fixed = True
+                    else:
+                        new_lines.append(line)
+                else:
+                    new_lines.append(line)
+            else:
+                new_lines.append(line)
+
+        if fixed and not dry_run:
+            new_content = "".join(new_lines).encode("utf-8")
+            with open(file_path, "wb") as f:
+                f.write(new_content)
+
+    except Exception as e:
+        print(f"  ❌ Error processing file: {e}")
+        return False
+
+    return fixed
+
+
+def main():
+    """Main function to fix indentation issues"""
+    import argparse
+
+    parser = argparse.ArgumentParser(description="Fix indentation issues in files")
+    parser.add_argument(
+        "paths", nargs="*", help="Files to check (default: all git tracked files)"
+    )
+    parser.add_argument(
+        "--dry-run",
+        action="store_true",
+        help="Show what would be fixed without changing files",
+    )
+
+    args = parser.parse_args()
+
+    if args.paths:
+        paths = args.paths
+    else:
+        # Default to git tracked files
+        import subprocess
+
+        try:
+            result = subprocess.run(
+                ["git", "ls-files"], capture_output=True, text=True, check=True
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
+        except subprocess.CalledProcessError:
+            print("Error: Not in a git repository")
+            return 1
+
+    total_fixed = 0
+
+    for path_str in paths:
+        if not path_str:
+            continue
+
+        path = Path(path_str)
+        if not path.exists() or not path.is_file():
+            continue
+
+        # Skip binary files and certain extensions
+        if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif", ".ico"]:
+            continue
+
+        if fix_file_indentation(path, args.dry_run):
+            total_fixed += 1
+            if not args.dry_run:
+                print(f"{path}:")
+
+    if total_fixed > 0:
+        if args.dry_run:
+            print(f"\nWould fix indentation in {total_fixed} files")
+            print("Run without --dry-run to apply fixes")
+        else:
+            print(f"\n✅ Fixed indentation in {total_fixed} files")
+    else:
+        print("✅ No indentation issues found!")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread
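
The indentation fixer above only shows the CLI plumbing in this hunk; the actual detection rules live earlier in the script. As a rough, hedged sketch of what such a check might look like (the space-before-tab rule here is an assumption for illustration, not necessarily what the kdevops script implements):

```python
import re


def mixed_indentation_lines(text):
    """Return 1-based line numbers whose leading whitespace mixes a
    space before a tab -- one plausible definition of 'indentation
    damage'. The real rules are in scripts/detect_indentation_issues.py
    and may differ."""
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        indent = re.match(r"[ \t]*", line).group(0)
        if " \t" in indent:  # a space followed by a tab inside the indent
            bad.append(lineno)
    return bad
```

Hooking a function like this into the `fix_file_indentation()` / `--dry-run` loop shown above gives the report-then-fix workflow the patch describes.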

* [PATCH v2 5/9] Makefile: move styling to scripts/style.Makefile
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (3 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 4/9] style: add extensive code formatting checks to make style Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 6/9] CLAUDE.md: add instructions to verify commit Luis Chamberlain
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Maintaining styling enhancements turns out to be its own thing, so move
this effort to its own file to keep the top-level Makefile tidy.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Makefile               | 13 +------------
 scripts/style.Makefile | 12 ++++++++++++
 2 files changed, 13 insertions(+), 12 deletions(-)
 create mode 100644 scripts/style.Makefile

diff --git a/Makefile b/Makefile
index 2c72042d..c88637c2 100644
--- a/Makefile
+++ b/Makefile
@@ -246,18 +246,7 @@ include scripts/tests.Makefile
 include scripts/ci.Makefile
 include scripts/archive.Makefile
 include scripts/defconfig.Makefile
-
-PHONY += fix-whitespace-last-commit
-fix-whitespace-last-commit:
-	$(Q)git diff --name-only --diff-filter=M HEAD~1..HEAD | xargs -r python3 scripts/fix_whitespace_issues.py
-
-PHONY += style
-style:
-	$(Q)if which black > /dev/null ; then black . || true; fi
-	$(Q)python3 scripts/detect_whitespace_issues.py || true
-	$(Q)python3 scripts/detect_indentation_issues.py || true
-	$(Q)python3 scripts/check_commit_format.py || true
-	$(Q)python3 scripts/ensure_newlines.py || true
+include scripts/style.Makefile
 
 PHONY += clean
 clean:
diff --git a/scripts/style.Makefile b/scripts/style.Makefile
new file mode 100644
index 00000000..2108eb4c
--- /dev/null
+++ b/scripts/style.Makefile
@@ -0,0 +1,12 @@
+PHONY += fix-whitespace-last-commit
+fix-whitespace-last-commit:
+	$(Q)git diff --name-only --diff-filter=M HEAD~1..HEAD | xargs -r python3 scripts/fix_whitespace_issues.py
+
+PHONY += style
+style:
+	$(Q)if which black > /dev/null ; then black . || true; fi
+	$(Q)python3 scripts/detect_whitespace_issues.py || true
+	$(Q)python3 scripts/detect_indentation_issues.py || true
+	$(Q)python3 scripts/check_commit_format.py || true
+	$(Q)python3 scripts/ensure_newlines.py || true
+
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread
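
The `fix-whitespace-last-commit` rule above pipes `git diff --name-only --diff-filter=M HEAD~1..HEAD` into the fixer script. The same file selection can be sketched in Python (a minimal equivalent, assuming only that `git` is on `PATH`):

```python
import subprocess


def last_commit_modified_files(cwd=None):
    """Files Modified (diff-filter=M) by the most recent commit --
    the same selection the Makefile rule feeds to
    scripts/fix_whitespace_issues.py. Newly added files are excluded."""
    result = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=M", "HEAD~1..HEAD"],
        capture_output=True,
        text=True,
        check=True,
        cwd=cwd,
    )
    return [f for f in result.stdout.splitlines() if f]
```

Note the `--diff-filter=M`: only files the last commit modified are touched, which keeps the fixer from rewriting unrelated tracked files.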

* [PATCH v2 6/9] CLAUDE.md: add instructions to verify commit
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (4 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 5/9] Makefile: move styling to scripts/style.Makefile Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 7/9] all: run black Luis Chamberlain
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Whitespace damage is just a pain with bots, so we also need to give
them a separate instruction to verify that a commit is not adding
new files with whitespace damage.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 CLAUDE.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/CLAUDE.md b/CLAUDE.md
index c52774b5..2bcf9443 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -121,6 +121,7 @@ make V=1 [target]       # Verbose build output
 make AV=1-6 [target]    # Ansible verbose output (levels 0-6)
 make dynconfig          # Generate dynamic configuration
 make style              # Check for whitespace issues - ALWAYS run before completing work
+make fix-whitespace-last-commit # Fixes commit white space damage
 make mrproper           # Clean everything and restart from scratch
 ```
 
@@ -428,6 +429,15 @@ The fixer script will:
 
 Always run `make style` after using the fixer to verify all issues are resolved.
 
+### Verifying commit has no white space damage
+
+Run the following after you commit something:
+```bash
+make fix-whitespace-last-commit
+```
+
+This will fix whitespace damage only in the files your last commit modified.
+
 ## Complex System Interactions
 
 kdevops integrates multiple subsystems (Ansible, Kconfig, Git, Make) that often
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread
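
The `make style` rules shown earlier follow a best-effort policy: run black only if it is installed, and never fail the build (`if which black ...; fi` plus the trailing `|| true`). That policy can be sketched in Python as follows (a hedged sketch, not kdevops code):

```python
import shutil
import subprocess


def run_style_tool(tool, args, cwd=None):
    """Run a formatter only if it is installed, never raising --
    mirroring the Makefile's `if which black > /dev/null; then
    black . || true; fi` pattern."""
    path = shutil.which(tool)
    if path is None:
        return None  # tool absent: silently skip, as the Makefile does
    return subprocess.run([path, *args], cwd=cwd).returncode
```

A caller can then treat `None` as "skipped" and a nonzero return code as "ran but found issues", without either case breaking the overall `make style` invocation.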

* [PATCH v2 7/9] all: run black
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (5 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 6/9] CLAUDE.md: add instructions to verify commit Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-31 12:57   ` Daniel Gomez
  2025-07-30  6:01 ` [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing Luis Chamberlain
  2025-07-30  6:01 ` [PATCH v2 9/9] bootlinux: add support for A/B kernel testing Luis Chamberlain
  8 siblings, 1 reply; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Run black to fix a large number of styling issues across our Python
scripts. To help ensure bots don't introduce odd styling, we need a
consistent convention.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .../blktests/augment_expunge_list.py          |  95 ++--
 .../workflows/blktests/gen-expunge-args.py    |  46 +-
 .../workflows/blktests/gen-results-dir.py     |  39 +-
 .../blktests/get_new_expunge_files.py         |  16 +-
 .../dynamic-kconfig/gen-dynamic-pci.py        |  89 ++--
 .../workflows/fstests/augment_expunge_list.py | 163 +++++--
 .../workflows/fstests/bad_files_summary.py    |  46 +-
 .../fstests/fstests-checktime-distribution.py |  46 +-
 .../workflows/fstests/gen_results_summary.py  | 132 +++---
 .../fstests/get_new_expunge_files.py          |  24 +-
 playbooks/python/workflows/fstests/lib/git.py |  21 +-
 .../workflows/fstests/xunit_merge_all.py      |  33 +-
 .../sysbench/sysbench-tps-compare.py          |  84 +++-
 .../workflows/sysbench/sysbench-tps-plot.py   |  36 +-
 .../sysbench/sysbench-tps-variance.py         | 435 +++++++++++++-----
 .../gen_pcie_passthrough_guestfs_xml.py       |  49 +-
 .../linux-mirror/python/gen-mirror-files.py   | 131 +++---
 .../linux-mirror/python/start-mirroring.py    | 116 +++--
 scripts/check_commit_format.py                |  28 +-
 .../generation/check_for_atomic_calls.py      |  71 +--
 .../generation/check_for_sleepy_calls.py      | 202 +++++---
 scripts/detect_whitespace_issues.py           |  38 +-
 scripts/fix_whitespace_issues.py              |  44 +-
 scripts/generate_refs.py                      |   6 +-
 scripts/honey-badger.py                       | 103 +++--
 scripts/spdxcheck.py                          | 201 ++++----
 scripts/update_ssh_config_guestfs.py          |  49 +-
 .../workflows/blktests/blktests_watchdog.py   |  75 ++-
 scripts/workflows/cxl/gen_qemu_cxl.py         | 235 +++++++---
 scripts/workflows/fstests/fstests_watchdog.py |  99 ++--
 scripts/workflows/generic/crash_report.py     |   4 +-
 scripts/workflows/generic/crash_watchdog.py   |  78 +++-
 scripts/workflows/lib/blktests.py             |  47 +-
 scripts/workflows/lib/crash.py                |  12 +-
 scripts/workflows/lib/fstests.py              | 155 ++++---
 scripts/workflows/lib/kssh.py                 | 178 ++++---
 scripts/workflows/lib/systemd_remote.py       | 101 ++--
 .../workflows/pynfs/check_pynfs_results.py    |  17 +-
 38 files changed, 2250 insertions(+), 1094 deletions(-)

diff --git a/playbooks/python/workflows/blktests/augment_expunge_list.py b/playbooks/python/workflows/blktests/augment_expunge_list.py
index 4fe856a3..a7b8eb42 100755
--- a/playbooks/python/workflows/blktests/augment_expunge_list.py
+++ b/playbooks/python/workflows/blktests/augment_expunge_list.py
@@ -15,58 +15,81 @@ import configparser
 from itertools import chain
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
 top_dir = oscheck_ansible_python_dir + "/../../../../"
-blktests_last_kernel = top_dir + 'workflows/blktests/results/last-kernel.txt'
+blktests_last_kernel = top_dir + "workflows/blktests/results/last-kernel.txt"
 expunge_name = "failures.txt"
 
+
 def append_line(output_file, test_failure_line):
     # We want to now add entries like block/xxx where xxx are digits
     output = open(output_file, "a+")
     output.write("%s\n" % test_failure_line)
     output.close()
 
+
 def is_config_bool_true(config, name):
-    if name in config and config[name].strip('\"') == "y":
+    if name in config and config[name].strip('"') == "y":
         return True
     return False
 
+
 def config_string(config, name):
     if name in config:
-        return config[name].strip('\"')
+        return config[name].strip('"')
     return None
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def read_blktest_last_kernel():
     if not os.path.isfile(blktests_last_kernel):
         return None
-    kfile = open(blktests_last_kernel, 'r')
+    kfile = open(blktests_last_kernel, "r")
     all_lines = kfile.readlines()
     kfile.close()
     for line in all_lines:
         return line.strip()
     return None
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Augments expunge list for blktest')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputdir', metavar='<output directory>', type=str,
-                        help='The directory where to generate the expunge failure.txt ')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(description="Augments expunge list for blktest")
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputdir",
+        metavar="<output directory>",
+        type=str,
+        help="The directory where to generate the expunge failure.txt ",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
 
-    dotconfig = top_dir + '/.config'
+    dotconfig = top_dir + "/.config"
     config = get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -85,7 +108,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.bad') and not f.endswith('.dmesg'):
+            if not f.endswith(".bad") and not f.endswith(".dmesg"):
                 continue
 
             bad_files.append(f)
@@ -97,13 +120,13 @@ def main():
         # f may be results/last-run/nodev/meta/009.dmesg
         bad_file_list = f.split("/")
         bad_file_list_len = len(bad_file_list) - 1
-        bad_file =      bad_file_list[bad_file_list_len]
-        test_group =    bad_file_list[bad_file_list_len-1]
-        bdev =          bad_file_list[bad_file_list_len-2]
+        bad_file = bad_file_list[bad_file_list_len]
+        test_group = bad_file_list[bad_file_list_len - 1]
+        bdev = bad_file_list[bad_file_list_len - 2]
 
         if args.verbose:
             sys.stdout.write("%s\n" % bad_file_list)
-            sys.stdout.write("\tbad_file: %s\n" %bad_file)
+            sys.stdout.write("\tbad_file: %s\n" % bad_file)
             sys.stdout.write("\ttest_group: %s\n" % test_group)
             sys.stdout.write("\tkernel: %s\n" % kernel)
 
@@ -126,11 +149,11 @@ def main():
             sys.exit(1)
 
         # This is like for example block/xxx where xxx are digits
-        test_failure_line = test_group + '/' + bad_file_test_number
+        test_failure_line = test_group + "/" + bad_file_test_number
 
         # now to stuff this into expunge files such as:
         # expunges/sles/15.3/failures.txt
-        expunge_kernel_dir = args.outputdir + '/' + kernel + '/'
+        expunge_kernel_dir = args.outputdir + "/" + kernel + "/"
         output_dir = expunge_kernel_dir
         output_file = output_dir + expunge_name
         shortcut_kernel_dir = None
@@ -145,19 +168,23 @@ def main():
                 sles_release_name = config_string(config, "CONFIG_KDEVOPS_HOSTS_PREFIX")
                 sles_release_parts = sles_release_name.split("sp")
                 if len(sles_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % sles_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % sles_release_name
+                    )
                     sys.exit(1)
-                sles_point_release = sles_release_parts[0].split("sles")[1] + "." + sles_release_parts[1]
+                sles_point_release = (
+                    sles_release_parts[0].split("sles")[1] + "." + sles_release_parts[1]
+                )
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_dir = args.outputdir + '/' + "sles/" + sles_point_release + '/'
+                shortcut_dir = args.outputdir + "/" + "sles/" + sles_point_release + "/"
                 shortcut_kernel_dir = shortcut_dir
                 shortcut_file = shortcut_dir + expunge_name
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/'
+                shortcut_kernel_dir = args.outputdir + "/" + shortcut_kernel + "/"
                 shortcut_dir = shortcut_kernel_dir
                 shortcut_file = shortcut_dir + expunge_name
 
@@ -170,10 +197,13 @@ def main():
                 os.makedirs(output_dir)
 
         if not os.path.isfile(output_file):
-            sys.stdout.write("====%s/%s new failure found file was empty\n" % (test_group, test_failure_line))
+            sys.stdout.write(
+                "====%s/%s new failure found file was empty\n"
+                % (test_group, test_failure_line)
+            )
             append_line(output_file, test_failure_line)
         else:
-            existing_file = open(output_file, 'r')
+            existing_file = open(output_file, "r")
             all_lines = existing_file.readlines()
             existing_file.close()
             found = False
@@ -182,13 +212,18 @@ def main():
                     found = True
                     break
             if not found:
-                sys.stdout.write("%s %s new failure found\n" % (test_group, test_failure_line))
+                sys.stdout.write(
+                    "%s %s new failure found\n" % (test_group, test_failure_line)
+                )
                 append_line(output_file, test_failure_line)
 
     if expunge_kernel_dir != "":
         sys.stdout.write("Sorting %s ...\n" % (expunge_kernel_dir))
-        sys.stdout.write("Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir))
+        sys.stdout.write(
+            "Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir)
+        )
         subprocess.call([oscheck_sort_expunge, expunge_kernel_dir])
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/gen-expunge-args.py b/playbooks/python/workflows/blktests/gen-expunge-args.py
index e73713f8..4d7f6d4c 100755
--- a/playbooks/python/workflows/blktests/gen-expunge-args.py
+++ b/playbooks/python/workflows/blktests/gen-expunge-args.py
@@ -14,16 +14,37 @@ import os
 import sys
 import subprocess
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Generates expunge arguments to run blktests check based on results directory')
-    parser.add_argument('--test-group', metavar='<group>', type=str,
-                        help='group of tests to focus on otherwise all groups are considered')
-    parser.add_argument('results', metavar='<directory with blktests results>', type=str,
-                        help='directory with blktests results')
-    parser.add_argument('--gen-exclude-args', const=True, default=False, action="store_const",
-                        help='Generate exclude arguments so to be passed to blktests check')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(
+        description="Generates expunge arguments to run blktests check based on results directory"
+    )
+    parser.add_argument(
+        "--test-group",
+        metavar="<group>",
+        type=str,
+        help="group of tests to focus on otherwise all groups are considered",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with blktests results>",
+        type=str,
+        help="directory with blktests results",
+    )
+    parser.add_argument(
+        "--gen-exclude-args",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Generate exclude arguments so to be passed to blktests check",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     bad_files = []
@@ -34,7 +55,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if f.endswith('.bad') or f.endswith('.dmesg'):
+            if f.endswith(".bad") or f.endswith(".dmesg"):
                 bad_files.append(f)
                 continue
     exclude_args = ""
@@ -58,12 +79,13 @@ def main():
         if args.test_group and args.test_group != group:
             continue
         if args.gen_exclude_args:
-            exclude_args += (" -x %s/%s" % (group, fail))
+            exclude_args += " -x %s/%s" % (group, fail)
         else:
             sys.stdout.write("%s/%s\n" % (group, fail))
 
     if args.gen_exclude_args:
         sys.stdout.write("%s\n" % (exclude_args))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/gen-results-dir.py b/playbooks/python/workflows/blktests/gen-results-dir.py
index cb3e76aa..e18504bc 100755
--- a/playbooks/python/workflows/blktests/gen-results-dir.py
+++ b/playbooks/python/workflows/blktests/gen-results-dir.py
@@ -18,7 +18,8 @@ oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
 top_dir = oscheck_ansible_python_dir + "/../../../../"
 results_dir = top_dir + "workflows/blktests/results/"
 last_run_dir = results_dir + "last-run/"
-blktests_last_kernel = top_dir + 'workflows/blktests/results/last-kernel.txt'
+blktests_last_kernel = top_dir + "workflows/blktests/results/last-kernel.txt"
+
 
 def clean_empty_dir(target_results):
     for i in range(1, 3):
@@ -31,12 +32,23 @@ def clean_empty_dir(target_results):
                 else:
                     clean_empty_dir(f)
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('--clean-dir-only', metavar='<clean_dir_only>', type=str, default='none',
-                        help='Do not perform an evaluation, just clean empty directories on the specified directory')
-    parser.add_argument('--copy-all', action='store_true',
-                        help='Copy all test results without filtering')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "--clean-dir-only",
+        metavar="<clean_dir_only>",
+        type=str,
+        default="none",
+        help="Do not perform an evaluation, just clean empty directories on the specified directory",
+    )
+    parser.add_argument(
+        "--copy-all",
+        action="store_true",
+        help="Copy all test results without filtering",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(blktests_last_kernel):
@@ -44,7 +56,7 @@ def main():
         sys.exit(1)
 
     kernel = None
-    f = open(blktests_last_kernel, 'r')
+    f = open(blktests_last_kernel, "r")
     for line in f:
         kernel = line.strip()
     if not line:
@@ -56,9 +68,11 @@ def main():
         clean_empty_dir(args.clean_dir_only)
         sys.exit(0)
 
-    target_results = results_dir + kernel + '/'
+    target_results = results_dir + kernel + "/"
     if not os.path.isdir(last_run_dir):
-        sys.stdout.write("Ignoring last-run directory %s as it is empty ...\n" % (last_run_dir))
+        sys.stdout.write(
+            "Ignoring last-run directory %s as it is empty ...\n" % (last_run_dir)
+        )
         sys.exit(0)
     sys.stdout.write("Copying %s to %s ...\n" % (last_run_dir, target_results))
     copytree(last_run_dir, target_results, dirs_exist_ok=True)
@@ -89,8 +103,8 @@ def main():
                     test_name = test_name_file_list[0]
 
                 test_dir = os.path.dirname(f)
-                name_lookup_base = test_dir + '/' + test_name + '*'
-                name_lookup = test_dir + '/' + test_name + '.*'
+                name_lookup_base = test_dir + "/" + test_name + "*"
+                name_lookup = test_dir + "/" + test_name + ".*"
                 listing = glob.glob(name_lookup)
                 bad_ext_found = False
                 if len(listing) > 0:
@@ -102,5 +116,6 @@ def main():
                         os.unlink(r)
     clean_empty_dir(target_results)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/blktests/get_new_expunge_files.py b/playbooks/python/workflows/blktests/get_new_expunge_files.py
index 45f0e1a3..f1f1241b 100755
--- a/playbooks/python/workflows/blktests/get_new_expunge_files.py
+++ b/playbooks/python/workflows/blktests/get_new_expunge_files.py
@@ -14,10 +14,17 @@ import sys
 import subprocess
 from lib import git
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('expunge_dir', metavar='<directory with expunge files>', type=str,
-                        help='directory with expunge files')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "expunge_dir",
+        metavar="<directory with expunge files>",
+        type=str,
+        help="directory with expunge files",
+    )
     args = parser.parse_args()
 
     block_expunge_dir = args.expunge_dir
@@ -36,5 +43,6 @@ def main():
                     short_file = f.split("../")[1]
                 sys.stdout.write("%s\n" % (short_file))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py b/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
index ed984ae1..107bb25b 100755
--- a/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
+++ b/playbooks/python/workflows/dynamic-kconfig/gen-dynamic-pci.py
@@ -15,27 +15,31 @@ sys_bus_prefix = "/sys/bus/pci/devices/"
 
 debug = 0
 
+
 def get_first_dir(path):
     if len(os.listdir(path)) > 0:
         return os.listdir(path)[0]
     return None
 
+
 def get_sysname(sys_path, entry):
     sys_entry_path = sys_path + entry
     if not os.path.isfile(sys_entry_path):
         return None
-    entry_fd = open(sys_entry_path, 'r')
+    entry_fd = open(sys_entry_path, "r")
     line = entry_fd.readlines()[0]
     line = line.strip()
     entry_fd.close()
     return line
 
+
 # kconfig does not like some characters
 def strip_kconfig_name(name):
-    fixed_name = name.replace("\"", "")
+    fixed_name = name.replace('"', "")
     fixed_name = fixed_name.replace("'", "")
     return fixed_name
 
+
 def get_special_device_nvme(pci_id, IOMMUGroup):
     pci_id_name = strip_kconfig_name(pci_id)
     sys_path = sys_bus_prefix + pci_id + "/nvme/"
@@ -51,7 +55,13 @@ def get_special_device_nvme(pci_id, IOMMUGroup):
     fw = get_sysname(block_sys_path, "firmware_rev")
     if not fw:
         return None
-    return "%s IOMMU group %s - /dev/%s - %s with FW %s" % (pci_id_name, IOMMUGroup, block_device_name, model, fw)
+    return "%s IOMMU group %s - /dev/%s - %s with FW %s" % (
+        pci_id_name,
+        IOMMUGroup,
+        block_device_name,
+        model,
+        fw,
+    )
 
 
 def get_kconfig_device_name(pci_id, sdevice, IOMMUGroup):
@@ -63,26 +73,31 @@ def get_kconfig_device_name(pci_id, sdevice, IOMMUGroup):
         return strip_kconfig_name(default_name)
     return strip_kconfig_name(special_name)
 
+
 def add_pcie_kconfig_string(prefix, val, name):
     config_name = prefix + "_" + name.upper()
     sys.stdout.write("config %s\n" % (config_name))
     sys.stdout.write("\tstring\n")
-    sys.stdout.write("\tdefault \"%s\"\n" % (strip_kconfig_name(str(val))))
+    sys.stdout.write('\tdefault "%s"\n' % (strip_kconfig_name(str(val))))
     sys.stdout.write("\n")
 
+
 def add_pcie_kconfig_name(config_name, sdevice):
     sys.stdout.write("config %s\n" % (config_name))
-    sys.stdout.write("\tbool \"%s\"\n" % (sdevice))
+    sys.stdout.write('\tbool "%s"\n' % (sdevice))
     sys.stdout.write("\tdefault n\n")
     sys.stdout.write("\thelp\n")
     sys.stdout.write("\t  Enabling this will PCI-E passthrough this device onto the\n")
     sys.stdout.write("\t  target guest.\n")
     sys.stdout.write("\n")
 
+
 def add_pcie_kconfig_target(config_name, sdevice):
     sys.stdout.write("config %s_TARGET_GUEST\n" % (config_name))
-    sys.stdout.write("\tstring  \"Taret guest to offload %s\"\n" % (strip_kconfig_name(sdevice)))
-    sys.stdout.write("\tdefault \"\"\n")
+    sys.stdout.write(
+        '\tstring  "Taret guest to offload %s"\n' % (strip_kconfig_name(sdevice))
+    )
+    sys.stdout.write('\tdefault ""\n')
     sys.stdout.write("\tdepends on %s\n" % config_name)
     sys.stdout.write("\tdepends on KDEVOPS_LIBVIRT_PCIE_PASSTHROUGH_TYPE_EACH\n")
     sys.stdout.write("\thelp\n")
@@ -90,7 +105,10 @@ def add_pcie_kconfig_target(config_name, sdevice):
     sys.stdout.write("\t  target guest.\n")
     sys.stdout.write("\n")
 
-def add_pcie_kconfig_entry(pci_id, sdevice, domain, bus, slot, function, IOMMUGroup, config_id):
+
+def add_pcie_kconfig_entry(
+    pci_id, sdevice, domain, bus, slot, function, IOMMUGroup, config_id
+):
     prefix = passthrough_prefix + "_%04d" % config_id
     name = get_kconfig_device_name(pci_id, sdevice, IOMMUGroup)
     add_pcie_kconfig_name(prefix, name)
@@ -104,22 +122,23 @@ def add_pcie_kconfig_entry(pci_id, sdevice, domain, bus, slot, function, IOMMUGr
     add_pcie_kconfig_string(prefix, slot, "slot")
     add_pcie_kconfig_string(prefix, function, "function")
 
+
 def add_new_device(slot, sdevice, IOMMUGroup, possible_id):
     # Example expeced format 0000:2d:00.0
-    m = re.match(r"^(?P<DOMAIN>\w+):"
-                  "(?P<BUS>\w+):"
-                  "(?P<MSLOT>\w+)\."
-                  "(?P<FUNCTION>\w+)$", slot)
+    m = re.match(
+        r"^(?P<DOMAIN>\w+):" "(?P<BUS>\w+):" "(?P<MSLOT>\w+)\." "(?P<FUNCTION>\w+)$",
+        slot,
+    )
     if not m:
         return possible_id
 
     possible_id += 1
 
     slot_dict = m.groupdict()
-    domain = "0x" + slot_dict['DOMAIN']
-    bus = "0x" + slot_dict['BUS']
-    mslot = "0x" + slot_dict['MSLOT']
-    function = "0x" + slot_dict['FUNCTION']
+    domain = "0x" + slot_dict["DOMAIN"]
+    bus = "0x" + slot_dict["BUS"]
+    mslot = "0x" + slot_dict["MSLOT"]
+    function = "0x" + slot_dict["FUNCTION"]
 
     if debug:
         sys.stdout.write("\tslot: %s\n" % (slot))
@@ -130,17 +149,26 @@ def add_new_device(slot, sdevice, IOMMUGroup, possible_id):
         sys.stdout.write("\tIOMMUGroup: %s\n" % (IOMMUGroup))
 
     if possible_id == 1:
-        sys.stdout.write("# Automatically generated PCI-E passthrough Kconfig by kdevops\n\n")
+        sys.stdout.write(
+            "# Automatically generated PCI-E passthrough Kconfig by kdevops\n\n"
+        )
 
-    add_pcie_kconfig_entry(slot, sdevice, domain, bus, mslot, function, IOMMUGroup, possible_id)
+    add_pcie_kconfig_entry(
+        slot, sdevice, domain, bus, mslot, function, IOMMUGroup, possible_id
+    )
 
     return possible_id
 
+
 def main():
     num_candidate_devices = 0
-    parser = argparse.ArgumentParser(description='Creates a Kconfig file lspci output')
-    parser.add_argument('input', metavar='<input file with lspci -Dvmmm output>', type=str,
-                        help='input file wth lspci -Dvmmm output')
+    parser = argparse.ArgumentParser(description="Creates a Kconfig file lspci output")
+    parser.add_argument(
+        "input",
+        metavar="<input file with lspci -Dvmmm output>",
+        type=str,
+        help="input file wth lspci -Dvmmm output",
+    )
     args = parser.parse_args()
 
     lspci_output = args.input
@@ -149,7 +177,7 @@ def main():
         sys.stdout.write("input file did not exist: %s\n" % (lspci_output))
         sys.exit(1)
 
-    lspci = open(lspci_output, 'r')
+    lspci = open(lspci_output, "r")
     all_lines = lspci.readlines()
     lspci.close()
 
@@ -159,17 +187,18 @@ def main():
 
     for line in all_lines:
         line = line.strip()
-        m = re.match(r"^(?P<TAG>\w+):"
-                      "(?P<STRING>.*)$", line)
+        m = re.match(r"^(?P<TAG>\w+):" "(?P<STRING>.*)$", line)
         if not m:
             continue
         eval_line = m.groupdict()
-        tag = eval_line['TAG']
-        data = eval_line['STRING']
+        tag = eval_line["TAG"]
+        data = eval_line["STRING"]
         data = data.strip()
         if tag == "Slot":
             if sdevice:
-                num_candidate_devices = add_new_device(slot, sdevice, IOMMUGroup, num_candidate_devices)
+                num_candidate_devices = add_new_device(
+                    slot, sdevice, IOMMUGroup, num_candidate_devices
+                )
             slot = data
             sdevice = None
             IOMMUGroup = None
@@ -180,11 +209,13 @@ def main():
 
     # Handle the last device
     if sdevice and slot:
-        num_candidate_devices = add_new_device(slot, sdevice, IOMMUGroup, num_candidate_devices)
+        num_candidate_devices = add_new_device(
+            slot, sdevice, IOMMUGroup, num_candidate_devices
+        )
 
     add_pcie_kconfig_string(passthrough_prefix, num_candidate_devices, "NUM_DEVICES")
     os.unlink(lspci_output)
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/augment_expunge_list.py b/playbooks/python/workflows/fstests/augment_expunge_list.py
index 9265cd8b..7f31401e 100755
--- a/playbooks/python/workflows/fstests/augment_expunge_list.py
+++ b/playbooks/python/workflows/fstests/augment_expunge_list.py
@@ -16,44 +16,69 @@ import configparser
 from itertools import chain
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
 top_dir = oscheck_ansible_python_dir + "/../../../../"
 
+
 def append_line(output_file, test_failure_line):
     # We want to now add entries like generic/xxx where xxx are digits
     output = open(output_file, "a+")
     output.write("%s\n" % test_failure_line)
     output.close()
 
+
 def is_config_bool_true(config, name):
-    if name in config and config[name].strip('\"') == "y":
+    if name in config and config[name].strip('"') == "y":
         return True
     return False
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Augments expunge list for oscheck')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputdir', metavar='<output directory>', type=str,
-                        help='The directory where to generate the expunge lists to')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Print more verbose information')
+    parser = argparse.ArgumentParser(description="Augments expunge list for oscheck")
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputdir",
+        metavar="<output directory>",
+        type=str,
+        help="The directory where to generate the expunge lists to",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Print more verbose information",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
 
-#    all_files = os.listdir(args.results)
-    dotconfig = top_dir + '/.config'
+    #    all_files = os.listdir(args.results)
+    dotconfig = top_dir + "/.config"
     config = get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -67,7 +92,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if f.endswith('.bad') or f.endswith('.dmesg'):
+            if f.endswith(".bad") or f.endswith(".dmesg"):
                 bad_files.append(f)
     for f in bad_files:
         if args.verbose:
@@ -78,15 +103,15 @@ def main():
         # where xxx are digits
         bad_file_list = f.split("/")
         bad_file_list_len = len(bad_file_list) - 1
-        bad_file =      bad_file_list[bad_file_list_len]
-        test_group =    bad_file_list[bad_file_list_len-1]
-        section =       bad_file_list[bad_file_list_len-2]
-        kernel =        bad_file_list[bad_file_list_len-3]
-        hostname =      bad_file_list[bad_file_list_len-4]
+        bad_file = bad_file_list[bad_file_list_len]
+        test_group = bad_file_list[bad_file_list_len - 1]
+        section = bad_file_list[bad_file_list_len - 2]
+        kernel = bad_file_list[bad_file_list_len - 3]
+        hostname = bad_file_list[bad_file_list_len - 4]
 
         if args.verbose:
             sys.stdout.write("%s\n" % bad_file_list)
-            sys.stdout.write("\tbad_file: %s\n" %bad_file)
+            sys.stdout.write("\tbad_file: %s\n" % bad_file)
             sys.stdout.write("\ttest_group: %s\n" % test_group)
             sys.stdout.write("\tsection: %s\n" % section)
             sys.stdout.write("\thostname: %s\n" % hostname)
@@ -94,20 +119,22 @@ def main():
         bad_file_parts = bad_file.split(".")
         bad_file_test_number = bad_file_parts[0]
         # This is like for example generic/xxx where xxx are digits
-        test_failure_line = test_group + '/' + bad_file_test_number
+        test_failure_line = test_group + "/" + bad_file_test_number
 
         # now to stuff this into expunge files such as:
         # path/4.19.17/xfs/unassigned/xfs_nocrc.txt
-        expunge_kernel_dir = args.outputdir + '/' + kernel + '/' + args.filesystem + '/'
-        output_dir = expunge_kernel_dir + 'unassigned/'
-        output_file = output_dir + section + '.txt'
+        expunge_kernel_dir = args.outputdir + "/" + kernel + "/" + args.filesystem + "/"
+        output_dir = expunge_kernel_dir + "unassigned/"
+        output_file = output_dir + section + ".txt"
 
         base_kernel = kernel
         if base_kernel.endswith("+"):
             base_kernel = kernel.replace("+", "")
-            base_expunge_kernel_dir = args.outputdir + '/' + base_kernel + '/' + args.filesystem + '/'
-            base_output_dir = base_expunge_kernel_dir + 'unassigned/'
-            base_output_file = base_output_dir + section + '.txt'
+            base_expunge_kernel_dir = (
+                args.outputdir + "/" + base_kernel + "/" + args.filesystem + "/"
+            )
+            base_output_dir = base_expunge_kernel_dir + "unassigned/"
+            base_output_file = base_output_dir + section + ".txt"
 
         shortcut_kernel_dir = None
         shortcut_dir = None
@@ -124,24 +151,39 @@ def main():
                 sles_release_name = sles_release_parts[0]
                 sles_release_parts = sles_release_name.split("sp")
                 if len(sles_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % sles_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % sles_release_name
+                    )
                     sys.exit(1)
                 sles_point_release = sles_release_parts[0] + "." + sles_release_parts[1]
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_kernel_dir = args.outputdir + '/' + "sles/" + sles_point_release + '/' + args.filesystem + '/'
+                shortcut_kernel_dir = (
+                    args.outputdir
+                    + "/"
+                    + "sles/"
+                    + sles_point_release
+                    + "/"
+                    + args.filesystem
+                    + "/"
+                )
 
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/' + args.filesystem + '/'
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_kernel_dir = (
+                    args.outputdir + "/" + shortcut_kernel + "/" + args.filesystem + "/"
+                )
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
         elif is_config_bool_true(config, "CONFIG_LIBVIRT_OPENSUSE"):
-            if is_config_bool_true(config, "CONFIG_WORKFLOW_KOTD_ENABLE") and "leap" in hostname:
+            if (
+                is_config_bool_true(config, "CONFIG_WORKFLOW_KOTD_ENABLE")
+                and "leap" in hostname
+            ):
                 leap_host_parts = hostname.split("leap")
                 if len(leap_host_parts) <= 1:
                     sys.stderr.write("Invalid hostname: %s\n" % hostname)
@@ -150,22 +192,34 @@ def main():
                 leap_release_name = leap_release_parts[0]
                 leap_release_parts = leap_release_name.split("sp")
                 if len(leap_release_parts) <= 1:
-                    sys.stderr.write("Unexpected sles_release_name: %s\n" % leap_release_name)
+                    sys.stderr.write(
+                        "Unexpected sles_release_name: %s\n" % leap_release_name
+                    )
                     sys.exit(1)
                 leap_point_release = leap_release_parts[0] + "." + leap_release_parts[1]
 
                 # This becomes generic release directory, not specific to any
                 # kernel.
-                shortcut_kernel_dir = args.outputdir + '/' + "opensuse-leap/" + leap_point_release + '/' + args.filesystem + '/'
+                shortcut_kernel_dir = (
+                    args.outputdir
+                    + "/"
+                    + "opensuse-leap/"
+                    + leap_point_release
+                    + "/"
+                    + args.filesystem
+                    + "/"
+                )
 
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
             else:
                 ksplit = kernel.split(".")
                 shortcut_kernel = ksplit[0] + "." + ksplit[1] + "." + ksplit[2]
-                shortcut_kernel_dir = args.outputdir + '/' + shortcut_kernel + '/' + args.filesystem + '/'
-                shortcut_dir = shortcut_kernel_dir + 'unassigned/'
-                shortcut_file = shortcut_dir + section + '.txt'
+                shortcut_kernel_dir = (
+                    args.outputdir + "/" + shortcut_kernel + "/" + args.filesystem + "/"
+                )
+                shortcut_dir = shortcut_kernel_dir + "unassigned/"
+                shortcut_file = shortcut_dir + section + ".txt"
 
         if not os.path.isdir(output_dir):
             if shortcut_dir and os.path.isdir(shortcut_dir):
@@ -173,7 +227,10 @@ def main():
                 output_file = shortcut_file
                 expunge_kernel_dir = shortcut_kernel_dir
             elif base_kernel != kernel and os.path.isdir(base_output_dir):
-                sys.stdout.write("<== expunges for %s not found but found base kernel %s expunge directory ==>\n" % (kernel, base_kernel))
+                sys.stdout.write(
+                    "<== expunges for %s not found but found base kernel %s expunge directory ==>\n"
+                    % (kernel, base_kernel)
+                )
                 expunge_kernel_dir = base_expunge_kernel_dir
                 output_dir = base_output_dir
                 output_file = base_output_file
@@ -182,10 +239,13 @@ def main():
                 os.makedirs(output_dir)
 
         if not os.path.isfile(output_file):
-            sys.stdout.write("====%s %s new failure found file was empty\n" % (section, test_failure_line))
+            sys.stdout.write(
+                "====%s %s new failure found file was empty\n"
+                % (section, test_failure_line)
+            )
             append_line(output_file, test_failure_line)
         else:
-            existing_file = open(output_file, 'r')
+            existing_file = open(output_file, "r")
             all_lines = existing_file.readlines()
             existing_file.close()
             found = False
@@ -194,13 +254,18 @@ def main():
                     found = True
                     break
             if not found:
-                sys.stdout.write("%s %s new failure found\n" % (section, test_failure_line))
+                sys.stdout.write(
+                    "%s %s new failure found\n" % (section, test_failure_line)
+                )
                 append_line(output_file, test_failure_line)
 
     if expunge_kernel_dir != "":
         sys.stdout.write("Sorting %s ...\n" % (expunge_kernel_dir))
-        sys.stdout.write("Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir))
+        sys.stdout.write(
+            "Running %s %s...\n" % (oscheck_sort_expunge, expunge_kernel_dir)
+        )
         subprocess.call([oscheck_sort_expunge, expunge_kernel_dir])
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/bad_files_summary.py b/playbooks/python/workflows/fstests/bad_files_summary.py
index 164409b5..d0cf4058 100755
--- a/playbooks/python/workflows/fstests/bad_files_summary.py
+++ b/playbooks/python/workflows/fstests/bad_files_summary.py
@@ -11,6 +11,7 @@ import argparse
 import os
 import sys
 
+
 def parse_results_ascii(sections, results, kernel, filesystem):
     sys.stdout.write("%s on %s\n" % (filesystem, kernel))
     for section in sections:
@@ -18,6 +19,7 @@ def parse_results_ascii(sections, results, kernel, filesystem):
         for test in results[section]:
             sys.stdout.write("\t%s\n" % test)
 
+
 def parse_results_html(sections, results, kernel, filesystem):
     sys.stdout.write("<html><title>%s on %s</title><body>" % (filesystem, kernel))
     sys.stdout.write("<h1>%s on %s</h1>\n" % (filesystem, kernel))
@@ -33,15 +35,28 @@ def parse_results_html(sections, results, kernel, filesystem):
             sys.stdout.write("</tr>\n")
     sys.stdout.write("</table></body></html>")
 
+
 def main():
-    parser = argparse.ArgumentParser(description='generate html file from results')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('--format', metavar='<output format>', type=str,
-                        help='Output format: ascii html, the default is ascii',
-                        default='txt')
+    parser = argparse.ArgumentParser(description="generate html file from results")
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "--format",
+        metavar="<output format>",
+        type=str,
+        help="Output format: ascii or html; the default is ascii",
+        default="txt",
+    )
     args = parser.parse_args()
     results = dict()
     sections = list()
@@ -51,27 +66,27 @@ def main():
     for root, dirs, all_files in os.walk(args.results):
         for fname in all_files:
             f = os.path.join(root, fname)
-            #sys.stdout.write("%s\n" % f)
+            # sys.stdout.write("%s\n" % f)
             if os.path.isdir(f):
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.bad'):
+            if not f.endswith(".bad"):
                 continue
 
             # f may be results/oscheck-xfs/4.19.0-4-amd64/xfs/generic/091.out.bad
             bad_file_list = f.split("/")
             bad_file_list_len = len(bad_file_list) - 1
             bad_file = bad_file_list[bad_file_list_len]
-            test_type = bad_file_list[bad_file_list_len-1]
-            section = bad_file_list[bad_file_list_len-2]
-            kernel = bad_file_list[bad_file_list_len-3]
+            test_type = bad_file_list[bad_file_list_len - 1]
+            section = bad_file_list[bad_file_list_len - 2]
+            kernel = bad_file_list[bad_file_list_len - 3]
 
             bad_file_parts = bad_file.split(".")
             bad_file_part_len = len(bad_file_parts) - 1
             bad_file_test_number = bad_file_parts[bad_file_part_len - 2]
             # This is like for example generic/091
-            test_failure_line = test_type + '/' + bad_file_test_number
+            test_failure_line = test_type + "/" + bad_file_test_number
 
             test_section = results.get(section)
             if not test_section:
@@ -86,5 +101,6 @@ def main():
     else:
         parse_results_ascii(sections, results, kernel, args.filesystem)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/fstests-checktime-distribution.py b/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
index b84e0a29..75025fa2 100755
--- a/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
+++ b/playbooks/python/workflows/fstests/fstests-checktime-distribution.py
@@ -16,12 +16,21 @@ import subprocess
 import collections
 
 oscheck_ansible_python_dir = os.path.dirname(os.path.abspath(__file__))
-oscheck_sort_expunge = oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+oscheck_sort_expunge = (
+    oscheck_ansible_python_dir + "/../../../scripts/workflows/fstests/sort-expunges.sh"
+)
+
 
 def main():
-    parser = argparse.ArgumentParser(description='Creates check.time.distribution files for all found check.time files')
-    parser.add_argument('results', metavar='<directory with check.time files>', type=str,
-                        help='directory with check.time files')
+    parser = argparse.ArgumentParser(
+        description="Creates check.time.distribution files for all found check.time files"
+    )
+    parser.add_argument(
+        "results",
+        metavar="<directory with check.time files>",
+        type=str,
+        help="directory with check.time files",
+    )
     args = parser.parse_args()
 
     expunge_kernel_dir = ""
@@ -31,22 +40,22 @@ def main():
     for root, dirs, all_files in os.walk(args.results):
         for fname in all_files:
             f = os.path.join(root, fname)
-            #sys.stdout.write("%s\n" % f)
+            # sys.stdout.write("%s\n" % f)
             if os.path.isdir(f):
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('check.time'):
+            if not f.endswith("check.time"):
                 continue
 
             # f may be results/oscheck-xfs/4.19.0-4-amd64/check.time
-            time_distribution = f + '.distribution'
+            time_distribution = f + ".distribution"
 
             if os.path.isfile(time_distribution):
                 os.unlink(time_distribution)
 
-            checktime = open(f, 'r')
-            distribution = open(time_distribution, 'w')
+            checktime = open(f, "r")
+            distribution = open(time_distribution, "w")
 
             sys.stdout.write("checktime: %s\n" % f)
 
@@ -57,17 +66,17 @@ def main():
             num_tests = 0
             for line in all_lines:
                 line = line.strip()
-                m = re.match(r"^(?P<GROUP>\w+)/"
-                              "(?P<NUMBER>\d+)\s+"
-                              "(?P<TIME>\d+)$", line)
+                m = re.match(
+                    r"^(?P<GROUP>\w+)/" "(?P<NUMBER>\d+)\s+" "(?P<TIME>\d+)$", line
+                )
                 if not m:
                     continue
                 testline = m.groupdict()
                 num_tests += 1
-                if int(testline['TIME']) in results:
-                    results[int(testline['TIME'])] += 1
+                if int(testline["TIME"]) in results:
+                    results[int(testline["TIME"])] += 1
                 else:
-                    results[int(testline['TIME'])] = 1
+                    results[int(testline["TIME"])] = 1
             od = collections.OrderedDict(sorted(results.items()))
 
             v_total = 0
@@ -76,8 +85,11 @@ def main():
                 v_total += v
 
             if num_tests != v_total:
-                sys.stdout.write("Unexpected error, total tests: %d but computed sum test: %d\n" % (num_tests, v_total))
+                sys.stdout.write(
+                    "Unexpected error, total tests: %d but computed sum of tests: %d\n"
+                    % (num_tests, v_total)
+                )
 
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/gen_results_summary.py b/playbooks/python/workflows/fstests/gen_results_summary.py
index 28dc064c..c0d0702b 100644
--- a/playbooks/python/workflows/fstests/gen_results_summary.py
+++ b/playbooks/python/workflows/fstests/gen_results_summary.py
@@ -20,22 +20,25 @@ import time
 from datetime import datetime
 from junitparser import JUnitXml, Property, Properties, Failure, Error, Skipped
 
+
 def get_results(dirroot, results_file):
     """Return a list of files named results_file in a directory hierarchy"""
     for dirpath, _dirs, filenames in os.walk(dirroot):
         if results_file in filenames:
-            yield dirpath + '/' + results_file
+            yield dirpath + "/" + results_file
+
 
 def parse_timestamp(timestamp):
     """Parse an ISO-8601-like timestamp as found in an xUnit file."""
     if timestamp == "":
         return 0
-    for fmt in ('%Y-%m-%dT%H:%M:%S%z', '%Y-%m-%dT%H:%M:%S'):
+    for fmt in ("%Y-%m-%dT%H:%M:%S%z", "%Y-%m-%dT%H:%M:%S"):
         try:
             return time.mktime(datetime.strptime(timestamp, fmt).timetuple())
         except ValueError:
             pass
-    raise ValueError('no valid timestamp format found')
+    raise ValueError("no valid timestamp format found")
+
 
 def failed_tests(testsuite):
     """This iterator the failed tests from the testsuite."""
@@ -43,6 +46,7 @@ def failed_tests(testsuite):
         if isinstance(testcase.result, Failure):
             yield testcase
 
+
 def get_property(props, key):
     """Return the value of the first property with the given name"""
     if props is None:
@@ -52,6 +56,7 @@ def get_property(props, key):
             return prop.value
     return None
 
+
 def get_properties(props, key):
     """An interator which returns values of properties with a given name."""
     if props is None:
@@ -60,6 +65,7 @@ def get_properties(props, key):
         if prop.name == key:
             yield prop.value
 
+
 def remove_properties(props, key):
     """Remove properties with a given name."""
     if props is None:
@@ -68,6 +74,7 @@ def remove_properties(props, key):
         if prop.name == key:
             props.remove(prop)
 
+
 def print_tests(out_f, testsuite, result_type, type_label):
     """Print all of the tests which match a particular result_type"""
     found = False
@@ -81,17 +88,18 @@ def print_tests(out_f, testsuite, result_type, type_label):
         if result is None:
             continue
         if not found:
-            out_f.write('  %s: ' % type_label)
+            out_f.write("  %s: " % type_label)
             pos = len(type_label) + 4
             found = True
         name_len = len(testcase.name) + 1
         pos += name_len + 1
         if pos > 76:
-            out_f.write('\n    ')
+            out_f.write("\n    ")
             pos = name_len + 5
-        out_f.write(testcase.name + ' ')
+        out_f.write(testcase.name + " ")
     if found:
-        out_f.write('\n')
+        out_f.write("\n")
+
 
 def total_tests(testsuites):
     """Print the total number of tests in an array of testsuites"""
@@ -101,6 +109,7 @@ def total_tests(testsuites):
             total += testsuite.tests
     return total
 
+
 def sum_testsuites(testsuites):
     """Summarize all of the test suite statistics"""
     runtime = 0
@@ -116,6 +125,7 @@ def sum_testsuites(testsuites):
         errors += testsuite.errors
     return (tests, skipped, failures, errors, runtime)
 
+
 def print_summary(out_f, testsuite, verbose, print_section):
     """Print a summary for a particular test suite
 
@@ -126,9 +136,9 @@ def print_summary(out_f, testsuite, verbose, print_section):
     ext4/bigalloc 244 tests, 25 skipped, 5 errors, 880 seconds
        generic/219 generic/235 generic/422 generic/451 generic/456
     """
-    cfg = get_property(testsuite.properties(), 'TESTCFG')
+    cfg = get_property(testsuite.properties(), "TESTCFG")
     if cfg is None:
-        cfg = get_property(testsuite.properties(), 'FSTESTCFG')
+        cfg = get_property(testsuite.properties(), "FSTESTCFG")
 
     runtime = testsuite.time
     tests = testsuite.tests
@@ -139,70 +149,74 @@ def print_summary(out_f, testsuite, verbose, print_section):
         for test_case in testsuite:
             classname = test_case.classname
             class_list = classname.split(".")
-            section = class_list[len(class_list)-1]
+            section = class_list[len(class_list) - 1]
             break
-        out_f.write('%s: %d tests, ' % (section, tests))
+        out_f.write("%s: %d tests, " % (section, tests))
     else:
-        out_f.write('%s: %d tests, ' % (cfg, tests))
+        out_f.write("%s: %d tests, " % (cfg, tests))
     if failures > 0:
-        out_f.write('%d failures, ' % failures)
+        out_f.write("%d failures, " % failures)
     if errors > 0:
-        out_f.write('%d errors, ' % errors)
+        out_f.write("%d errors, " % errors)
     if skipped > 0:
-        out_f.write('%d skipped, ' % skipped)
+        out_f.write("%d skipped, " % skipped)
     if runtime is None:
         runtime = 0
-    out_f.write('%d seconds\n' % runtime)
+    out_f.write("%d seconds\n" % runtime)
     if verbose:
         for test_case in testsuite:
-            status = 'Pass'
+            status = "Pass"
             for result in test_case.result:
                 if isinstance(result, Failure):
-                    status = 'Failed'
+                    status = "Failed"
                 if isinstance(result, Skipped):
-                    status = 'Skipped'
+                    status = "Skipped"
                 if isinstance(result, Error):
-                    status = 'Error'
-            out_f.write("  %-12s %-8s %ds\n" %
-                        (test_case.name, status, test_case.time))
+                    status = "Error"
+            out_f.write("  %-12s %-8s %ds\n" % (test_case.name, status, test_case.time))
     else:
         if failures > 0:
-            print_tests(out_f, testsuite, Failure, 'Failures')
+            print_tests(out_f, testsuite, Failure, "Failures")
         if errors > 0:
-            print_tests(out_f, testsuite, Error, 'Errors')
+            print_tests(out_f, testsuite, Error, "Errors")
+
 
 def print_property_line(out_f, props, key):
     """Print a line containing the given property."""
     value = get_property(props, key)
     if value is not None and value != "":
-        out_f.write('%-10s %s\n' % (key + ':', value))
+        out_f.write("%-10s %s\n" % (key + ":", value))
+
 
 def print_properties(out_f, props, key):
     """Print multiple property lines."""
     for value in get_properties(props, key):
-        out_f.write('%-10s %s\n' % (key + ':', value))
+        out_f.write("%-10s %s\n" % (key + ":", value))
+
 
 def print_header(out_f, props):
     """Print the header of the report."""
-    print_property_line(out_f, props, 'TESTRUNID')
-    print_property_line(out_f, props, 'KERNEL')
-    print_property_line(out_f, props, 'CMDLINE')
-    print_property_line(out_f, props, 'CPUS')
-    print_property_line(out_f, props, 'MEM')
-    print_property_line(out_f, props, 'MNTOPTS')
-    out_f.write('\n')
+    print_property_line(out_f, props, "TESTRUNID")
+    print_property_line(out_f, props, "KERNEL")
+    print_property_line(out_f, props, "CMDLINE")
+    print_property_line(out_f, props, "CPUS")
+    print_property_line(out_f, props, "MEM")
+    print_property_line(out_f, props, "MNTOPTS")
+    out_f.write("\n")
+
 
 def print_trailer(out_f, props):
     """Print the trailer of the report."""
-    out_f.write('\n')
-    print_property_line(out_f, props, 'FSTESTIMG')
-    print_property_line(out_f, props, 'FSTESTPRJ')
-    print_properties(out_f, props, 'FSTESTVER')
-    print_property_line(out_f, props, 'FSTESTCFG')
-    print_property_line(out_f, props, 'FSTESTSET')
-    print_property_line(out_f, props, 'FSTESTEXC')
-    print_property_line(out_f, props, 'FSTESTOPT')
-    print_property_line(out_f, props, 'GCE ID')
+    out_f.write("\n")
+    print_property_line(out_f, props, "FSTESTIMG")
+    print_property_line(out_f, props, "FSTESTPRJ")
+    print_properties(out_f, props, "FSTESTVER")
+    print_property_line(out_f, props, "FSTESTCFG")
+    print_property_line(out_f, props, "FSTESTSET")
+    print_property_line(out_f, props, "FSTESTEXC")
+    print_property_line(out_f, props, "FSTESTOPT")
+    print_property_line(out_f, props, "GCE ID")
+
 
 def check_for_ltm(results_dir, props):
     """Check to see if the results directory was created by the LTM and
@@ -210,15 +224,15 @@ def check_for_ltm(results_dir, props):
     mode.
     """
     try:
-        out_f = open(os.path.join(results_dir, 'ltm-run-stats'))
+        out_f = open(os.path.join(results_dir, "ltm-run-stats"))
         for line in out_f:
-            key, value = line.split(': ', 1)
-            value = value.rstrip('\n').strip('"')
+            key, value = line.split(": ", 1)
+            value = value.rstrip("\n").strip('"')
             remove_properties(props, key)
             props.add_property(Property(key, value))
         out_f.close()
-        remove_properties(props, 'GCE ID')
-        remove_properties(props, 'FSTESTCFG')
+        remove_properties(props, "GCE ID")
+        remove_properties(props, "FSTESTCFG")
         return True
     except IOError:
         try:
@@ -227,9 +241,15 @@ def check_for_ltm(results_dir, props):
             pass
         return False
 
-def gen_results_summary(results_dir, output_fn=None, merge_fn=None,
-                        verbose=False, print_section=False,
-                        results_file='results.xml'):
+
+def gen_results_summary(
+    results_dir,
+    output_fn=None,
+    merge_fn=None,
+    verbose=False,
+    print_section=False,
+    results_file="results.xml",
+):
     """Scan a results directory and generate a summary file"""
     reports = []
     combined = JUnitXml()
@@ -263,16 +283,18 @@ def gen_results_summary(results_dir, output_fn=None, merge_fn=None,
         combined.add_testsuite(testsuite)
         nr_files += 1
 
-    out_f.write('Totals: %d tests, %d skipped, %d failures, %d errors, %ds\n' \
-                % sum_testsuites(reports))
+    out_f.write(
+        "Totals: %d tests, %d skipped, %d failures, %d errors, %ds\n"
+        % sum_testsuites(reports)
+    )
 
     print_trailer(out_f, props)
 
     if merge_fn is not None:
         combined.update_statistics()
-        combined.write(merge_fn + '.new')
+        combined.write(merge_fn + ".new")
         if os.path.exists(merge_fn):
-            os.rename(merge_fn, merge_fn + '.bak')
-        os.rename(merge_fn + '.new', merge_fn)
+            os.rename(merge_fn, merge_fn + ".bak")
+        os.rename(merge_fn + ".new", merge_fn)
 
     return nr_files
diff --git a/playbooks/python/workflows/fstests/get_new_expunge_files.py b/playbooks/python/workflows/fstests/get_new_expunge_files.py
index a01330a2..5b164044 100755
--- a/playbooks/python/workflows/fstests/get_new_expunge_files.py
+++ b/playbooks/python/workflows/fstests/get_new_expunge_files.py
@@ -14,12 +14,23 @@ import sys
 import subprocess
 from lib import git
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Get list of expunge files not yet committed in git')
-    parser.add_argument('filesystem', metavar='<filesystem name>', type=str,
-                        help='filesystem which was tested')
-    parser.add_argument('expunge_dir', metavar='<directory with expunge files>', type=str,
-                        help='directory with expunge files')
+    parser = argparse.ArgumentParser(
+        description="Get list of expunge files not yet committed in git"
+    )
+    parser.add_argument(
+        "filesystem",
+        metavar="<filesystem name>",
+        type=str,
+        help="filesystem which was tested",
+    )
+    parser.add_argument(
+        "expunge_dir",
+        metavar="<directory with expunge files>",
+        type=str,
+        help="directory with expunge files",
+    )
     args = parser.parse_args()
 
     fs_expunge_dir = args.expunge_dir
@@ -41,5 +52,6 @@ def main():
                     short_file = f.split("../")[1]
                 sys.stdout.write("%s\n" % (short_file))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/fstests/lib/git.py b/playbooks/python/workflows/fstests/lib/git.py
index 1884f565..fe8cf857 100644
--- a/playbooks/python/workflows/fstests/lib/git.py
+++ b/playbooks/python/workflows/fstests/lib/git.py
@@ -2,25 +2,36 @@
 
 import subprocess, os
 
+
 class GitError(Exception):
     pass
+
+
 class ExecutionError(GitError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(GitError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def _check(process):
     if process.returncode != 0:
         raise ExecutionError(process.returncode)
 
+
 def is_new_file(file):
-    cmd = ['git', 'status', '-s', file ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["git", "status", "-s", file]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=1)
@@ -31,6 +42,6 @@ def is_new_file(file):
         process.wait()
         if process.returncode != 0:
             return False
-        if stdout.startswith('??'):
+        if stdout.startswith("??"):
             return True
         return False
diff --git a/playbooks/python/workflows/fstests/xunit_merge_all.py b/playbooks/python/workflows/fstests/xunit_merge_all.py
index 6af6e981..d2d2b64c 100755
--- a/playbooks/python/workflows/fstests/xunit_merge_all.py
+++ b/playbooks/python/workflows/fstests/xunit_merge_all.py
@@ -11,6 +11,7 @@ import os
 import sys
 from junitparser import JUnitXml, TestSuite
 
+
 def get_test_suite(filename):
     try:
         ts = JUnitXml.fromfile(filename)
@@ -18,21 +19,31 @@ def get_test_suite(filename):
         sys.exit("Couldn't open %s: %s" % (filename, e[1]))
 
     if type(ts) != TestSuite:
-        sys.exit('%s is not a xUnit report file' % filename)
+        sys.exit("%s is not a xUnit report file" % filename)
     return ts
 
+
 def merge_ts(old_ts, new_ts):
     for tc in new_ts:
         old_ts.add_testcase(tc)
     old_ts.update_statistics()
     return old_ts
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Merge all xunit files into one')
-    parser.add_argument('results', metavar='<directory with results>', type=str,
-                        help='directory with results file')
-    parser.add_argument('outputfile', metavar='<output file>', type=str,
-                        help='The file to generate output to')
+    parser = argparse.ArgumentParser(description="Merge all xunit files into one")
+    parser.add_argument(
+        "results",
+        metavar="<directory with results>",
+        type=str,
+        help="directory with results file",
+    )
+    parser.add_argument(
+        "outputfile",
+        metavar="<output file>",
+        type=str,
+        help="The file to generate output to",
+    )
     args = parser.parse_args()
 
     all_xunit_ts = None
@@ -46,7 +57,7 @@ def main():
                 continue
             if not os.path.isfile(f):
                 continue
-            if not f.endswith('.xml'):
+            if not f.endswith(".xml"):
                 continue
 
             sys.stdout.write("Processing %s ...\n" % f)
@@ -60,7 +71,11 @@ def main():
 
     if all_xunit_ts:
         all_xunit_ts.write(args.outputfile)
-        sys.stdout.write("%s generated by merging all the above %d xunit files successfully\n" % (args.outputfile, num))
+        sys.stdout.write(
+            "%s generated by merging all the above %d xunit files successfully\n"
+            % (args.outputfile, num)
+        )
+
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-compare.py b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
index 8faecdcf..dafeb119 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
@@ -8,18 +8,20 @@ import re
 import argparse
 from concurrent.futures import ThreadPoolExecutor
 
+
 # Function to parse a line and extract time and TPS
 def parse_line(line):
-    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    match = re.search(r"\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)", line)
     if match:
         time_in_seconds = int(match.group(1))
         tps = float(match.group(2))
         return time_in_seconds, tps
     return None
 
+
 # Function to read and parse sysbench output file
 def read_sysbench_output(file_path):
-    with open(file_path, 'r') as file:
+    with open(file_path, "r") as file:
         lines = file.readlines()
 
     with ThreadPoolExecutor() as executor:
@@ -27,23 +29,58 @@ def read_sysbench_output(file_path):
 
     return [result for result in results if result is not None]
 
+
 # Function to list available matplotlib themes
 def list_themes():
     print("Available matplotlib themes:")
     for style in plt.style.available:
         print(style)
 
+
 # Main function
 def main():
-    parser = argparse.ArgumentParser(description='Compare sysbench outputs.')
-    parser.add_argument('file1', type=str, nargs='?', default='sysbench_output_doublewrite.txt', help='First sysbench output file')
-    parser.add_argument('file2', type=str, nargs='?', default='sysbench_output_nodoublewrite.txt', help='Second sysbench output file')
-    parser.add_argument('--legend1', type=str, default='innodb_doublewrite=ON', help='Legend for the first file')
-    parser.add_argument('--legend2', type=str, default='innodb_doublewrite=OFF', help='Legend for the second file')
-    parser.add_argument('--theme', type=str, default='dark_background', help='Matplotlib theme to use')
-    parser.add_argument('--list-themes', action='store_true', help='List available matplotlib themes')
-    parser.add_argument('--output', type=str, default='a_vs_b.png', help='Path of file to save')
-    parser.add_argument('--report-interval', type=int, default=1, help='Time interval in seconds for reporting')
+    parser = argparse.ArgumentParser(description="Compare sysbench outputs.")
+    parser.add_argument(
+        "file1",
+        type=str,
+        nargs="?",
+        default="sysbench_output_doublewrite.txt",
+        help="First sysbench output file",
+    )
+    parser.add_argument(
+        "file2",
+        type=str,
+        nargs="?",
+        default="sysbench_output_nodoublewrite.txt",
+        help="Second sysbench output file",
+    )
+    parser.add_argument(
+        "--legend1",
+        type=str,
+        default="innodb_doublewrite=ON",
+        help="Legend for the first file",
+    )
+    parser.add_argument(
+        "--legend2",
+        type=str,
+        default="innodb_doublewrite=OFF",
+        help="Legend for the second file",
+    )
+    parser.add_argument(
+        "--theme", type=str, default="dark_background", help="Matplotlib theme to use"
+    )
+    parser.add_argument(
+        "--list-themes", action="store_true", help="List available matplotlib themes"
+    )
+    parser.add_argument(
+        "--output", type=str, default="a_vs_b.png", help="Path of file to save"
+    )
+    parser.add_argument(
+        "--report-interval",
+        type=int,
+        default=1,
+        help="Time interval in seconds for reporting",
+    )
 
     args = parser.parse_args()
 
@@ -62,37 +99,40 @@ def main():
     tps_data_2 = [(time * args.report_interval, tps) for time, tps in tps_data_2]
 
     # Determine the maximum time value to decide if we need to use hours or seconds
-    max_time_in_seconds = max(max(tps_data_1, key=lambda x: x[0])[0], max(tps_data_2, key=lambda x: x[0])[0])
+    max_time_in_seconds = max(
+        max(tps_data_1, key=lambda x: x[0])[0], max(tps_data_2, key=lambda x: x[0])[0]
+    )
     use_hours = max_time_in_seconds > 2 * 3600
 
     # Convert times if necessary
     if use_hours:
         tps_data_1 = [(time / 3600, tps) for time, tps in tps_data_1]
         tps_data_2 = [(time / 3600, tps) for time, tps in tps_data_2]
-        time_label = 'Time (hours)'
+        time_label = "Time (hours)"
     else:
-        time_label = 'Time (seconds)'
+        time_label = "Time (seconds)"
 
     # Create pandas DataFrames
-    df1 = pd.DataFrame(tps_data_1, columns=[time_label, 'TPS'])
-    df2 = pd.DataFrame(tps_data_2, columns=[time_label, 'TPS'])
+    df1 = pd.DataFrame(tps_data_1, columns=[time_label, "TPS"])
+    df2 = pd.DataFrame(tps_data_2, columns=[time_label, "TPS"])
 
     # Plot the TPS values
     plt.figure(figsize=(30, 12))
 
-    plt.plot(df1[time_label], df1['TPS'], 'ro', markersize=2, label=args.legend1)
-    plt.plot(df2[time_label], df2['TPS'], 'go', markersize=2, label=args.legend2)
+    plt.plot(df1[time_label], df1["TPS"], "ro", markersize=2, label=args.legend1)
+    plt.plot(df2[time_label], df2["TPS"], "go", markersize=2, label=args.legend2)
 
-    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.title("Transactions Per Second (TPS) Over Time")
     plt.xlabel(time_label)
-    plt.ylabel('TPS')
+    plt.ylabel("TPS")
     plt.grid(True)
     # Try plotting without this to zoom in
     plt.ylim(0)
     plt.legend()
     plt.tight_layout()
     plt.savefig(args.output)
-    #plt.show()
+    # plt.show()
+
 
-if __name__ == '__main__':
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-plot.py b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
index 5505ced8..ee56b1f5 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
@@ -9,27 +9,38 @@ import re
 import argparse
 from concurrent.futures import ThreadPoolExecutor
 
+
 # Function to parse a line and extract time and TPS from text
 def parse_line(line):
-    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    match = re.search(r"\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)", line)
     if match:
         time_in_seconds = int(match.group(1))
         tps = float(match.group(2))
         return time_in_seconds, tps
     return None
 
+
 def main():
     # Setup argument parser
-    parser = argparse.ArgumentParser(description="Generate TPS plot from sysbench output")
-    parser.add_argument('input_file', type=str, help="Path to the input file (text or JSON)")
-    parser.add_argument('--output', type=str, default='tps_over_time.png', help="Output image file (default: tps_over_time.png)")
+    parser = argparse.ArgumentParser(
+        description="Generate TPS plot from sysbench output"
+    )
+    parser.add_argument(
+        "input_file", type=str, help="Path to the input file (text or JSON)"
+    )
+    parser.add_argument(
+        "--output",
+        type=str,
+        default="tps_over_time.png",
+        help="Output image file (default: tps_over_time.png)",
+    )
 
     # Parse arguments
     args = parser.parse_args()
 
     # Read the input file
     try:
-        with open(args.input_file, 'r') as file:
+        with open(args.input_file, "r") as file:
             # Read text lines and parse them concurrently
             lines = file.readlines()
             with ThreadPoolExecutor() as executor:
@@ -51,19 +62,19 @@ def main():
     # Convert times if necessary
     if use_hours:
         tps_data = [(time / 3600, tps) for time, tps in tps_data]
-        time_label = 'Time (hours)'
+        time_label = "Time (hours)"
     else:
-        time_label = 'Time (seconds)'
+        time_label = "Time (seconds)"
 
     # Create a pandas DataFrame
-    df = pd.DataFrame(tps_data, columns=[time_label, 'TPS'])
+    df = pd.DataFrame(tps_data, columns=[time_label, "TPS"])
 
     # Plot the TPS values
     plt.figure(figsize=(30, 12))
-    plt.plot(df[time_label], df['TPS'], 'o', markersize=2)
-    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.plot(df[time_label], df["TPS"], "o", markersize=2)
+    plt.title("Transactions Per Second (TPS) Over Time")
     plt.xlabel(time_label)
-    plt.ylabel('TPS')
+    plt.ylabel("TPS")
     plt.grid(True)
     plt.ylim(0)
     plt.tight_layout()
@@ -72,5 +83,6 @@ def main():
     plt.savefig(args.output)
     print(f"TPS plot saved to {args.output}")
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-variance.py b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
index 96971458..aee083b9 100755
--- a/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
@@ -9,15 +9,17 @@ import seaborn as sns
 import argparse
 from scipy.stats import norm
 
+
 def extract_tps(filename):
     tps_values = []
-    with open(filename, 'r') as file:
+    with open(filename, "r") as file:
         for line in file:
-            match = re.search(r'tps: (\d+\.\d+)', line)
+            match = re.search(r"tps: (\d+\.\d+)", line)
             if match:
                 tps_values.append(float(match.group(1)))
     return tps_values
 
+
 def analyze_tps(tps_values):
     mean_tps = np.mean(tps_values)
     median_tps = np.median(tps_values)
@@ -25,28 +27,49 @@ def analyze_tps(tps_values):
     variance_tps = np.var(tps_values)
     return mean_tps, median_tps, std_tps, variance_tps
 
+
 def print_statistics(label, tps_values):
     mean_tps, median_tps, std_tps, variance_tps = analyze_tps(tps_values)
-    print(f'{label} Statistics:')
-    print(f'Mean TPS: {mean_tps:.2f}')
-    print(f'Median TPS: {median_tps:.2f}')
-    print(f'Standard Deviation of TPS: {std_tps:.2f}')
-    print(f'Variance of TPS: {variance_tps:.2f}\n')
+    print(f"{label} Statistics:")
+    print(f"Mean TPS: {mean_tps:.2f}")
+    print(f"Median TPS: {median_tps:.2f}")
+    print(f"Standard Deviation of TPS: {std_tps:.2f}")
+    print(f"Variance of TPS: {variance_tps:.2f}\n")
+
 
 def plot_histograms(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     plt.figure(figsize=(20, 12))
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    plt.hist(tps_values1, bins=bins, alpha=0.5, color=color1, edgecolor='black', label=legend1)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    plt.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.5,
+        color=color1,
+        edgecolor="black",
+        label=legend1,
+    )
     if tps_values2:
-        plt.hist(tps_values2, bins=bins, alpha=0.5, color=color2, edgecolor='black', label=legend2)
-    plt.title('Distribution of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Frequency')
-    plt.legend(loc='best')
+        plt.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.5,
+            color=color2,
+            edgecolor="black",
+            label=legend2,
+        )
+    plt.title("Distribution of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Frequency")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'histogram.png')
+    plt.savefig(outdir + "histogram.png")
     plt.show()
 
+
 def plot_box_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     data = []
     labels = []
@@ -58,112 +81,180 @@ def plot_box_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, o
     plt.figure(figsize=(20, 12))
     box = plt.boxplot(data, labels=labels, patch_artist=True)
     colors = [color1, color2]
-    for patch, color in zip(box['boxes'], colors):
+    for patch, color in zip(box["boxes"], colors):
         patch.set_facecolor(color)
-    plt.title('Box Plot of TPS Values')
-    plt.ylabel('Transactions Per Second (TPS)')
+    plt.title("Box Plot of TPS Values")
+    plt.ylabel("Transactions Per Second (TPS)")
     plt.grid(True)
-    plt.savefig(outdir + 'box_plot.png')
+    plt.savefig(outdir + "box_plot.png")
     plt.show()
 
-def plot_density_plots(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_density_plots(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     plt.figure(figsize=(20, 12))
     sns.kdeplot(tps_values1, fill=True, label=legend1, color=color1)
     if tps_values2:
         sns.kdeplot(tps_values2, fill=True, label=legend2, color=color2)
-    plt.title('Density Plot of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Density')
-    plt.legend(loc='best')
+    plt.title("Density Plot of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Density")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'density_plot.png')
+    plt.savefig(outdir + "density_plot.png")
     plt.show()
 
-def plot_combined_hist_density(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_combined_hist_density(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     plt.figure(figsize=(20, 12))
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    plt.hist(tps_values1, bins=bins, alpha=0.3, color=color1, edgecolor='black', label=f'Histogram {legend1}', density=True)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    plt.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.3,
+        color=color1,
+        edgecolor="black",
+        label=f"Histogram {legend1}",
+        density=True,
+    )
     if tps_values2:
-        plt.hist(tps_values2, bins=bins, alpha=0.3, color=color2, edgecolor='black', label=f'Histogram {legend2}', density=True)
-    sns.kdeplot(tps_values1, fill=False, label=f'Density {legend1}', color=color1)
+        plt.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.3,
+            color=color2,
+            edgecolor="black",
+            label=f"Histogram {legend2}",
+            density=True,
+        )
+    sns.kdeplot(tps_values1, fill=False, label=f"Density {legend1}", color=color1)
     if tps_values2:
-        sns.kdeplot(tps_values2, fill=False, label=f'Density {legend2}', color=color2)
+        sns.kdeplot(tps_values2, fill=False, label=f"Density {legend2}", color=color2)
 
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
     ax2 = plt.gca().twinx()
-    ax2.set_ylabel('Density')
-    ax2.axvline(mean1, color=color1, linestyle='dotted', linewidth=2)
-    ax2.axvline(mean1 - std1, color=color1, linestyle='dotted', linewidth=1)
-    ax2.axvline(mean1 + std1, color=color1, linestyle='dotted', linewidth=1)
+    ax2.set_ylabel("Density")
+    ax2.axvline(mean1, color=color1, linestyle="dotted", linewidth=2)
+    ax2.axvline(mean1 - std1, color=color1, linestyle="dotted", linewidth=1)
+    ax2.axvline(mean1 + std1, color=color1, linestyle="dotted", linewidth=1)
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        ax2.axvline(mean2, color=color2, linestyle='dotted', linewidth=2)
-        ax2.axvline(mean2 - std2, color=color2, linestyle='dotted', linewidth=1)
-        ax2.axvline(mean2 + std2, color=color2, linestyle='dotted', linewidth=1)
-
-    plt.title('Combined Histogram and Density Plot of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Frequency/Density')
-    plt.legend(loc='best')
+        ax2.axvline(mean2, color=color2, linestyle="dotted", linewidth=2)
+        ax2.axvline(mean2 - std2, color=color2, linestyle="dotted", linewidth=1)
+        ax2.axvline(mean2 + std2, color=color2, linestyle="dotted", linewidth=1)
+
+    plt.title("Combined Histogram and Density Plot of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Frequency/Density")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'combined_hist_density.png')
+    plt.savefig(outdir + "combined_hist_density.png")
     plt.show()
 
+
 def plot_bell_curve(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     plt.figure(figsize=(20, 12))
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
-    x1 = np.linspace(mean1 - 3*std1, mean1 + 3*std1, 100)
-    plt.plot(x1, norm.pdf(x1, mean1, std1) * 100, label=f'Bell Curve {legend1}', color=color1)  # Multiplying by 100 for percentage
+    x1 = np.linspace(mean1 - 3 * std1, mean1 + 3 * std1, 100)
+    plt.plot(
+        x1, norm.pdf(x1, mean1, std1) * 100, label=f"Bell Curve {legend1}", color=color1
+    )  # Multiplying by 100 for percentage
 
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        x2 = np.linspace(mean2 - 3*std2, mean2 + 3*std2, 100)
-        plt.plot(x2, norm.pdf(x2, mean2, std2) * 100, label=f'Bell Curve {legend2}', color=color2)  # Multiplying by 100 for percentage
-
-    plt.title('Bell Curve (Normal Distribution) of TPS Values')
-    plt.xlabel('Transactions Per Second (TPS)')
-    plt.ylabel('Probability Density (%)')
-    plt.legend(loc='best')
+        x2 = np.linspace(mean2 - 3 * std2, mean2 + 3 * std2, 100)
+        plt.plot(
+            x2,
+            norm.pdf(x2, mean2, std2) * 100,
+            label=f"Bell Curve {legend2}",
+            color=color2,
+        )  # Multiplying by 100 for percentage
+
+    plt.title("Bell Curve (Normal Distribution) of TPS Values")
+    plt.xlabel("Transactions Per Second (TPS)")
+    plt.ylabel("Probability Density (%)")
+    plt.legend(loc="best")
     plt.grid(True)
-    plt.savefig(outdir + 'bell_curve.png')
+    plt.savefig(outdir + "bell_curve.png")
     plt.show()
 
-def plot_combined_hist_bell_curve(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
+
+def plot_combined_hist_bell_curve(
+    tps_values1, tps_values2, legend1, legend2, color1, color2, outdir
+):
     fig, ax1 = plt.subplots(figsize=(20, 12))
 
-    bins = np.linspace(min(min(tps_values1), min(tps_values2)), max(max(tps_values1), max(tps_values2)), 30)
-    ax1.hist(tps_values1, bins=bins, alpha=0.5, color=color1, edgecolor='black', label=legend1)
+    bins = np.linspace(
+        min(min(tps_values1), min(tps_values2)),
+        max(max(tps_values1), max(tps_values2)),
+        30,
+    )
+    ax1.hist(
+        tps_values1,
+        bins=bins,
+        alpha=0.5,
+        color=color1,
+        edgecolor="black",
+        label=legend1,
+    )
     if tps_values2:
-        ax1.hist(tps_values2, bins=bins, alpha=0.5, color=color2, edgecolor='black', label=legend2)
-
-    ax1.set_xlabel('Transactions Per Second (TPS)')
-    ax1.set_ylabel('Frequency')
-    ax1.legend(loc='upper left')
+        ax1.hist(
+            tps_values2,
+            bins=bins,
+            alpha=0.5,
+            color=color2,
+            edgecolor="black",
+            label=legend2,
+        )
+
+    ax1.set_xlabel("Transactions Per Second (TPS)")
+    ax1.set_ylabel("Frequency")
+    ax1.legend(loc="upper left")
     ax1.grid(True)
 
     ax2 = ax1.twinx()
     mean1, std1 = np.mean(tps_values1), np.std(tps_values1)
-    x1 = np.linspace(mean1 - 3*std1, mean1 + 3*std1, 100)
-    ax2.plot(x1, norm.pdf(x1, mean1, std1) * 100, label=f'Bell Curve {legend1}', color=color1, linestyle='dashed')
-    ax2.axvline(mean1, color=color1, linestyle='dotted', linewidth=2)
-    ax2.axvline(mean1 - std1, color=color1, linestyle='dotted', linewidth=1)
-    ax2.axvline(mean1 + std1, color=color1, linestyle='dotted', linewidth=1)
+    x1 = np.linspace(mean1 - 3 * std1, mean1 + 3 * std1, 100)
+    ax2.plot(
+        x1,
+        norm.pdf(x1, mean1, std1) * 100,
+        label=f"Bell Curve {legend1}",
+        color=color1,
+        linestyle="dashed",
+    )
+    ax2.axvline(mean1, color=color1, linestyle="dotted", linewidth=2)
+    ax2.axvline(mean1 - std1, color=color1, linestyle="dotted", linewidth=1)
+    ax2.axvline(mean1 + std1, color=color1, linestyle="dotted", linewidth=1)
 
     if tps_values2:
         mean2, std2 = np.mean(tps_values2), np.std(tps_values2)
-        x2 = np.linspace(mean2 - 3*std2, mean2 + 3*std2, 100)
-        ax2.plot(x2, norm.pdf(x2, mean2, std2) * 100, label=f'Bell Curve {legend2}', color=color2, linestyle='dashed')
-        ax2.axvline(mean2, color=color2, linestyle='dotted', linewidth=2)
-        ax2.axvline(mean2 - std2, color=color2, linestyle='dotted', linewidth=1)
-        ax2.axvline(mean2 + std2, color=color2, linestyle='dotted', linewidth=1)
-
-    ax2.set_ylabel('Probability Density (%)')
-    ax2.legend(loc='upper center')
-
-    plt.title('Combined Histogram and Bell Curve of TPS Values')
-    plt.savefig(outdir + 'combined_hist_bell_curve.png')
+        x2 = np.linspace(mean2 - 3 * std2, mean2 + 3 * std2, 100)
+        ax2.plot(
+            x2,
+            norm.pdf(x2, mean2, std2) * 100,
+            label=f"Bell Curve {legend2}",
+            color=color2,
+            linestyle="dashed",
+        )
+        ax2.axvline(mean2, color=color2, linestyle="dotted", linewidth=2)
+        ax2.axvline(mean2 - std2, color=color2, linestyle="dotted", linewidth=1)
+        ax2.axvline(mean2 + std2, color=color2, linestyle="dotted", linewidth=1)
+
+    ax2.set_ylabel("Probability Density (%)")
+    ax2.legend(loc="upper center")
+
+    plt.title("Combined Histogram and Bell Curve of TPS Values")
+    plt.savefig(outdir + "combined_hist_bell_curve.png")
     plt.show()
 
+
 def plot_variance_bars(variance1, variance2, legend1, legend2, color1, color2, outdir):
     fig, ax1 = plt.subplots(figsize=(20, 12))
 
@@ -173,24 +264,39 @@ def plot_variance_bars(variance1, variance2, legend1, legend2, color1, color2, o
 
     bars = plt.bar(labels, variances, color=colors)
     for bar, variance in zip(bars, variances):
-        plt.text(bar.get_x() + bar.get_width() / 2, bar.get_height(), f'{variance:.2f}', ha='center', va='bottom')
+        plt.text(
+            bar.get_x() + bar.get_width() / 2,
+            bar.get_height(),
+            f"{variance:.2f}",
+            ha="center",
+            va="bottom",
+        )
 
     # Calculate the factor by which the larger variance is greater than the smaller variance
     if variance1 != 0 and variance2 != 0:
         factor = max(variance1, variance2) / min(variance1, variance2)
-        factor_text = f'Variance Factor: {factor:.2f}'
-        plt.text(1, max(variances) * 1.05, factor_text, ha='center', va='bottom', fontsize=12, color='white')
-
-    plt.title('Variance of TPS Values')
-    plt.ylabel('Variance')
+        factor_text = f"Variance Factor: {factor:.2f}"
+        plt.text(
+            1,
+            max(variances) * 1.05,
+            factor_text,
+            ha="center",
+            va="bottom",
+            fontsize=12,
+            color="white",
+        )
+
+    plt.title("Variance of TPS Values")
+    plt.ylabel("Variance")
 
     # Add lollipop marker
     for bar, variance in zip(bars, variances):
-        plt.plot(bar.get_x() + bar.get_width() / 2, variance, 'o', color='black')
+        plt.plot(bar.get_x() + bar.get_width() / 2, variance, "o", color="black")
 
-    plt.savefig(outdir + 'variance_bar.png')
+    plt.savefig(outdir + "variance_bar.png")
     plt.show()
 
+
 def plot_outliers(tps_values1, tps_values2, legend1, legend2, color1, color2, outdir):
     data = [tps_values1]
     labels = [legend1]
@@ -202,41 +308,69 @@ def plot_outliers(tps_values1, tps_values2, legend1, legend2, color1, color2, ou
         colors.append(color2)
 
     fig, ax = plt.subplots(figsize=(20, 12))
-    box = ax.boxplot(data, labels=labels, patch_artist=True, showfliers=True,
-                     whiskerprops=dict(color='white', linewidth=2),
-                     capprops=dict(color='white', linewidth=2),
-                     medianprops=dict(color='yellow', linewidth=2))
+    box = ax.boxplot(
+        data,
+        labels=labels,
+        patch_artist=True,
+        showfliers=True,
+        whiskerprops=dict(color="white", linewidth=2),
+        capprops=dict(color="white", linewidth=2),
+        medianprops=dict(color="yellow", linewidth=2),
+    )
 
     # Color the boxes
-    for patch, color in zip(box['boxes'], colors):
+    for patch, color in zip(box["boxes"], colors):
         patch.set_facecolor(color)
 
     # Scatter plot for the actual points
     for i, (d, color) in enumerate(zip(data, colors)):
         y = d
         # Adding jitter to the x-axis for better visibility
-        x = np.random.normal(i + 1, 0.04, size=len(y))  # Adding some jitter to the x-axis
-        ax.scatter(x, y, alpha=0.6, color=color, edgecolor='black')
+        x = np.random.normal(
+            i + 1, 0.04, size=len(y)
+        )  # Adding some jitter to the x-axis
+        ax.scatter(x, y, alpha=0.6, color=color, edgecolor="black")
 
-    plt.title('Outliers in TPS Values')
-    plt.ylabel('Transactions Per Second (TPS)')
+    plt.title("Outliers in TPS Values")
+    plt.ylabel("Transactions Per Second (TPS)")
     plt.grid(True)
-    plt.savefig(outdir + 'outliers_plot.png')
+    plt.savefig(outdir + "outliers_plot.png")
     plt.show()
 
+
 def main():
-    parser = argparse.ArgumentParser(description='Analyze and compare TPS values from sysbench output files.')
-    parser.add_argument('file1', help='First TPS file')
-    parser.add_argument('--legend1', type=str, default='innodb_doublewrite=ON', help='Legend for the first file')
-    parser.add_argument('file2', nargs='?', default=None, help='Second TPS file (optional)')
-    parser.add_argument('--legend2', type=str, default='innodb_doublewrite=OFF', help='Legend for the second file')
-    parser.add_argument('--dir', type=str, default='./', help='Path to place images')
-    parser.add_argument('--color1', default='cyan', help='Color for the first dataset (default: cyan)')
-    parser.add_argument('--color2', default='orange', help='Color for the second dataset (default: orange)')
+    parser = argparse.ArgumentParser(
+        description="Analyze and compare TPS values from sysbench output files."
+    )
+    parser.add_argument("file1", help="First TPS file")
+    parser.add_argument(
+        "--legend1",
+        type=str,
+        default="innodb_doublewrite=ON",
+        help="Legend for the first file",
+    )
+    parser.add_argument(
+        "file2", nargs="?", default=None, help="Second TPS file (optional)"
+    )
+    parser.add_argument(
+        "--legend2",
+        type=str,
+        default="innodb_doublewrite=OFF",
+        help="Legend for the second file",
+    )
+    parser.add_argument("--dir", type=str, default="./", help="Path to place images")
+    parser.add_argument(
+        "--color1", default="cyan", help="Color for the first dataset (default: cyan)"
+    )
+    parser.add_argument(
+        "--color2",
+        default="orange",
+        help="Color for the second dataset (default: orange)",
+    )
 
     args = parser.parse_args()
 
-    plt.style.use('dark_background')  # Set the dark theme
+    plt.style.use("dark_background")  # Set the dark theme
 
     tps_values1 = extract_tps(args.file1)
     tps_values2 = extract_tps(args.file2) if args.file2 else None
@@ -245,22 +379,89 @@ def main():
     if tps_values2:
         print_statistics(args.legend2, tps_values2)
 
-    plot_histograms(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_box_plots(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_density_plots(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_combined_hist_density(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_bell_curve(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-    plot_combined_hist_bell_curve(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
+    plot_histograms(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_box_plots(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_density_plots(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_combined_hist_density(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_bell_curve(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+    plot_combined_hist_bell_curve(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
 
     # Plot variance bars
     _, _, _, variance1 = analyze_tps(tps_values1)
     if tps_values2:
         _, _, _, variance2 = analyze_tps(tps_values2)
-        plot_variance_bars(variance1, variance2, args.legend1, args.legend2, args.color1, args.color2, args.dir)
+        plot_variance_bars(
+            variance1,
+            variance2,
+            args.legend1,
+            args.legend2,
+            args.color1,
+            args.color2,
+            args.dir,
+        )
     else:
-        plot_variance_bars(variance1, 0, args.legend1, '', args.color1, 'black', args.dir)  # Use black for the second bar if there's only one dataset
-
-    plot_outliers(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2, args.dir)
-
-if __name__ == '__main__':
+        plot_variance_bars(
+            variance1, 0, args.legend1, "", args.color1, "black", args.dir
+        )  # Use black for the second bar if there's only one dataset
+
+    plot_outliers(
+        tps_values1,
+        tps_values2,
+        args.legend1,
+        args.legend2 if args.legend2 else "",
+        args.color1,
+        args.color2,
+        args.dir,
+    )
+
+
+if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py b/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
index a5abdf0f..076fe583 100755
--- a/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
+++ b/playbooks/roles/gen_nodes/python/gen_pcie_passthrough_guestfs_xml.py
@@ -24,11 +24,12 @@ pcie_hotplug_template = """<!-- PCIE passthrough device -->
 <!-- End of PCIE passthrough device -->
 """
 
+
 def main():
-    topdir = os.environ.get('TOPDIR', '.')
+    topdir = os.environ.get("TOPDIR", ".")
 
     # load extra_vars
-    with open(f'{topdir}/extra_vars.yaml') as stream:
+    with open(f"{topdir}/extra_vars.yaml") as stream:
         extra_vars = yaml.safe_load(stream)
 
     yaml_nodes_file = f'{topdir}/{extra_vars["kdevops_nodes"]}'
@@ -38,36 +39,48 @@ def main():
         nodes = yaml.safe_load(stream)
 
     # add pcie devices
-    for node in nodes['guestfs_nodes']:
-        name = node['name']
-        pcipassthrough = node.get('pcipassthrough')
+    for node in nodes["guestfs_nodes"]:
+        name = node["name"]
+        pcipassthrough = node.get("pcipassthrough")
         if not pcipassthrough:
             continue
         for dev_key_name in pcipassthrough:
             dev = pcipassthrough.get(dev_key_name)
             dev_keys = list(dev.keys())
-            if 'domain' not in dev_keys or 'bus' not in dev_keys or 'slot' not in dev_keys or 'function' not in dev_keys:
-                raise Exception(f"Missing pcie attributes for device %s in %s" %
-                                (dev_key_name, yaml_nodes_file))
-            domain = hex(dev.get('domain'))
-            bus = hex(dev.get('bus'))
-            slot = hex(dev.get('slot'))
-            function = hex(dev.get('function'))
+            if (
+                "domain" not in dev_keys
+                or "bus" not in dev_keys
+                or "slot" not in dev_keys
+                or "function" not in dev_keys
+            ):
+                raise Exception(
+                    f"Missing pcie attributes for device %s in %s"
+                    % (dev_key_name, yaml_nodes_file)
+                )
+            domain = hex(dev.get("domain"))
+            bus = hex(dev.get("bus"))
+            slot = hex(dev.get("slot"))
+            function = hex(dev.get("function"))
 
-            pcie_xml = f"{extra_vars['guestfs_path']}/{name}/pcie_passthrough_" + dev_key_name + ".xml"
+            pcie_xml = (
+                f"{extra_vars['guestfs_path']}/{name}/pcie_passthrough_"
+                + dev_key_name
+                + ".xml"
+            )
 
             if os.path.exists(pcie_xml):
                 os.remove(pcie_xml)
 
-            device_xml = open(pcie_xml, 'w')
+            device_xml = open(pcie_xml, "w")
             context = {
-                "domain" : domain,
-                "bus" : bus,
-                "slot" : slot,
-                "function" : function,
+                "domain": domain,
+                "bus": bus,
+                "slot": slot,
+                "function": function,
             }
             device_xml.write(pcie_hotplug_template.format(**context))
             device_xml.close()
 
+
 if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/linux-mirror/python/gen-mirror-files.py b/playbooks/roles/linux-mirror/python/gen-mirror-files.py
index 65fc909d..c43b34a8 100755
--- a/playbooks/roles/linux-mirror/python/gen-mirror-files.py
+++ b/playbooks/roles/linux-mirror/python/gen-mirror-files.py
@@ -13,9 +13,9 @@ import time
 import os
 from pathlib import Path
 
-topdir = os.environ.get('TOPDIR', '.')
+topdir = os.environ.get("TOPDIR", ".")
 yaml_dir = topdir + "/playbooks/roles/linux-mirror/linux-mirror-systemd/"
-default_mirrors_yaml = yaml_dir + 'mirrors.yaml'
+default_mirrors_yaml = yaml_dir + "mirrors.yaml"
 
 service_template = """[Unit]
 Description={short_name} mirror [{target}]
@@ -44,19 +44,37 @@ OnUnitInactiveSec={refresh}
 WantedBy=default.target
 """
 
+
 def main():
-    parser = argparse.ArgumentParser(description='gen-mirror-files')
-    parser.add_argument('--yaml-mirror', metavar='<yaml_mirror>', type=str,
-                        default=default_mirrors_yaml,
-                        help='The yaml mirror input file.')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
-    parser.add_argument('--refresh', metavar='<refresh>', type=str,
-                        default='360m',
-                        help='How often to update the git tree.')
-    parser.add_argument('--refresh-on-boot', metavar='<refresh>', type=str,
-                        default='10m',
-                        help='How long to wait on boot to update the git tree.')
+    parser = argparse.ArgumentParser(description="gen-mirror-files")
+    parser.add_argument(
+        "--yaml-mirror",
+        metavar="<yaml_mirror>",
+        type=str,
+        default=default_mirrors_yaml,
+        help="The yaml mirror input file.",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
+    parser.add_argument(
+        "--refresh",
+        metavar="<refresh>",
+        type=str,
+        default="360m",
+        help="How often to update the git tree.",
+    )
+    parser.add_argument(
+        "--refresh-on-boot",
+        metavar="<refresh>",
+        type=str,
+        default="10m",
+        help="How long to wait on boot to update the git tree.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.yaml_mirror):
@@ -64,86 +82,95 @@ def main():
         sys.exit(1)
 
     # load the yaml input file
-    with open(f'{args.yaml_mirror}') as stream:
+    with open(f"{args.yaml_mirror}") as stream:
         yaml_vars = yaml.safe_load(stream)
 
-    if yaml_vars.get('mirrors') is None:
-        raise Exception(f"Missing mirrors descriptions on %s" %
-                        (args.yaml_mirror))
+    if yaml_vars.get("mirrors") is None:
+        raise Exception(f"Missing mirrors descriptions on %s" % (args.yaml_mirror))
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("Yaml mirror input: %s\n\n" % args.yaml_mirror)
 
     total = 0
-    for mirror in yaml_vars['mirrors']:
+    for mirror in yaml_vars["mirrors"]:
         total = total + 1
 
-        if mirror.get('short_name') is None:
-            raise Exception(f"Missing required short_name on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('url') is None:
-            raise Exception(f"Missing required url on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('target') is None:
-            raise Exception(f"Missing required target on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-
-        short_name = mirror['short_name'].replace("/", "-")
-        url = mirror['url']
-        target = mirror['target']
+        if mirror.get("short_name") is None:
+            raise Exception(
+                f"Missing required short_name on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("url") is None:
+            raise Exception(
+                f"Missing required url on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("target") is None:
+            raise Exception(
+                f"Missing required target on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+
+        short_name = mirror["short_name"].replace("/", "-")
+        url = mirror["url"]
+        target = mirror["target"]
 
         service_file = f"{yaml_dir}" + short_name + "-mirror" + ".service"
         timer_file = f"{yaml_dir}" + short_name + "-mirror" + ".timer"
 
         refresh = args.refresh
-        if mirror.get('refresh'):
-            refresh = mirror.get('refresh')
+        if mirror.get("refresh"):
+            refresh = mirror.get("refresh")
         refresh_on_boot = args.refresh_on_boot
-        if mirror.get('refresh_on_boot'):
-            refresh = mirror.get('refresh_on_boot')
+        if mirror.get("refresh_on_boot"):
+            refresh_on_boot = mirror.get("refresh_on_boot")
 
-        if (args.verbose):
+        if args.verbose:
             sys.stdout.write("Mirror #%d\n" % total)
-            sys.stdout.write("\tshort_name: %s\n" % (mirror['short_name']))
-            sys.stdout.write("\turl: %s\n" % (mirror['short_name']))
-            sys.stdout.write("\ttarget: %s\n" % (mirror['short_name']))
+            sys.stdout.write("\tshort_name: %s\n" % (mirror["short_name"]))
+            sys.stdout.write("\turl: %s\n" % (mirror["url"]))
+            sys.stdout.write("\ttarget: %s\n" % (mirror["target"]))
             sys.stdout.write("\tservice: %s\n" % (service_file))
             sys.stdout.write("\ttimer: %s\n" % (timer_file))
             sys.stdout.write("\trefresh: %s\n" % (refresh))
             sys.stdout.write("\trefresh_on_boot: %s\n" % (refresh_on_boot))
 
         if os.path.exists(service_file):
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_service: True\n")
             os.remove(service_file)
         else:
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_service: False\n")
 
-        output_service = open(service_file, 'w')
+        output_service = open(service_file, "w")
         context = {
-            "short_name" : short_name,
-            "url" : url,
-            "target" : target,
+            "short_name": short_name,
+            "url": url,
+            "target": target,
         }
         output_service.write(service_template.format(**context))
         output_service.close()
 
         if os.path.exists(timer_file):
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_timer: True\n")
             os.remove(timer_file)
         else:
-            if (args.verbose):
+            if args.verbose:
                 sys.stdout.write("\toverwrite_timer: False\n")
 
-        output_timer = open(timer_file, 'w')
+        output_timer = open(timer_file, "w")
         context = {
-            "short_name" : short_name,
-            "url" : url,
-            "target" : target,
-            "refresh" : refresh,
-            "refresh_on_boot" : refresh_on_boot,
+            "short_name": short_name,
+            "url": url,
+            "target": target,
+            "refresh": refresh,
+            "refresh_on_boot": refresh_on_boot,
         }
         output_timer.write(timer_template.format(**context))
         output_timer.close()
 
+
 if __name__ == "__main__":
     main()
diff --git a/playbooks/roles/linux-mirror/python/start-mirroring.py b/playbooks/roles/linux-mirror/python/start-mirroring.py
index 4e6b9ec2..03ede449 100755
--- a/playbooks/roles/linux-mirror/python/start-mirroring.py
+++ b/playbooks/roles/linux-mirror/python/start-mirroring.py
@@ -14,24 +14,25 @@ import os
 from pathlib import Path
 import subprocess
 
-topdir = os.environ.get('TOPDIR', '.')
+topdir = os.environ.get("TOPDIR", ".")
 yaml_dir = topdir + "/playbooks/roles/linux-mirror/linux-mirror-systemd/"
-default_mirrors_yaml = yaml_dir + 'mirrors.yaml'
+default_mirrors_yaml = yaml_dir + "mirrors.yaml"
+
+mirror_path = "/mirror/"
 
-mirror_path = '/mirror/'
 
 def mirror_entry(mirror, args):
-    short_name = mirror['short_name']
-    url = mirror['url']
-    target = mirror['target']
+    short_name = mirror["short_name"]
+    url = mirror["url"]
+    target = mirror["target"]
     reference = None
     reference_args = []
 
-    if mirror.get('reference'):
-        reference = mirror_path + mirror.get('reference')
-        reference_args = [ '--reference', reference ]
+    if mirror.get("reference"):
+        reference = mirror_path + mirror.get("reference")
+        reference_args = ["--reference", reference]
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("\tshort_name: %s\n" % (short_name))
         sys.stdout.write("\turl: %s\n" % (url))
         sys.stdout.write("\ttarget: %s\n" % (url))
@@ -40,28 +41,31 @@ def mirror_entry(mirror, args):
         else:
             sys.stdout.write("\treference: %s\n" % (reference))
     cmd = [
-           'git',
-           '-C',
-           mirror_path,
-           'clone',
-           '--verbose',
-           '--progress',
-           '--mirror',
-           url,
-           target ]
+        "git",
+        "-C",
+        mirror_path,
+        "clone",
+        "--verbose",
+        "--progress",
+        "--mirror",
+        url,
+        target,
+    ]
     cmd = cmd + reference_args
     mirror_target = mirror_path + target
     if os.path.isdir(mirror_target):
         return
     sys.stdout.write("Mirroring: %s onto %s\n" % (short_name, mirror_target))
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("%s\n" % (cmd))
         sys.stdout.write("%s\n" % (" ".join(cmd)))
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     try:
         data = process.communicate(timeout=12000)
     except subprocess.TimeoutExpired:
@@ -73,12 +77,21 @@ def mirror_entry(mirror, args):
 
 
 def main():
-    parser = argparse.ArgumentParser(description='start-mirroring')
-    parser.add_argument('--yaml-mirror', metavar='<yaml_mirror>', type=str,
-                        default=default_mirrors_yaml,
-                        help='The yaml mirror input file.')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
+    parser = argparse.ArgumentParser(description="start-mirroring")
+    parser.add_argument(
+        "--yaml-mirror",
+        metavar="<yaml_mirror>",
+        type=str,
+        default=default_mirrors_yaml,
+        help="The yaml mirror input file.",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.yaml_mirror):
@@ -86,14 +99,13 @@ def main():
         sys.exit(1)
 
     # load the yaml input file
-    with open(f'{args.yaml_mirror}') as stream:
+    with open(f"{args.yaml_mirror}") as stream:
         yaml_vars = yaml.safe_load(stream)
 
-    if yaml_vars.get('mirrors') is None:
-        raise Exception(f"Missing mirrors descriptions on %s" %
-                        (args.yaml_mirror))
+    if yaml_vars.get("mirrors") is None:
+        raise Exception(f"Missing mirrors descriptions on %s" % (args.yaml_mirror))
 
-    if (args.verbose):
+    if args.verbose:
         sys.stdout.write("Yaml mirror input: %s\n\n" % args.yaml_mirror)
 
     # We do 3 passes, first to check the file has all requirements
@@ -103,25 +115,35 @@ def main():
     # The second pass is for mirrors which do not have a reference, the
     # third and final pass is for mirrors which do have a reference.
     total = 0
-    for mirror in yaml_vars['mirrors']:
+    for mirror in yaml_vars["mirrors"]:
         total = total + 1
 
-        if mirror.get('short_name') is None:
-            raise Exception(f"Missing required short_name on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('url') is None:
-            raise Exception(f"Missing required url on mirror item #%d on file: %s" % (total, args.yaml_mirror))
-        if mirror.get('target') is None:
-            raise Exception(f"Missing required target for mirror %s on yaml file %s on item #%d" % (mirror.get('short_name'), args.yaml_mirror, total))
+        if mirror.get("short_name") is None:
+            raise Exception(
+                f"Missing required short_name on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("url") is None:
+            raise Exception(
+                f"Missing required url on mirror item #%d on file: %s"
+                % (total, args.yaml_mirror)
+            )
+        if mirror.get("target") is None:
+            raise Exception(
+                f"Missing required target for mirror %s on yaml file %s on item #%d"
+                % (mirror.get("short_name"), args.yaml_mirror, total)
+            )
 
     # Mirror trees without a reference first
-    for mirror in yaml_vars['mirrors']:
-        if not mirror.get('reference'):
+    for mirror in yaml_vars["mirrors"]:
+        if not mirror.get("reference"):
             mirror_entry(mirror, args)
 
     # Mirror trees which need a reference last
-    for mirror in yaml_vars['mirrors']:
-        if mirror.get('reference'):
+    for mirror in yaml_vars["mirrors"]:
+        if mirror.get("reference"):
             mirror_entry(mirror, args)
 
+
 if __name__ == "__main__":
     main()
diff --git a/scripts/check_commit_format.py b/scripts/check_commit_format.py
index f72f9d1a..3ac37944 100755
--- a/scripts/check_commit_format.py
+++ b/scripts/check_commit_format.py
@@ -11,11 +11,16 @@ import subprocess
 import sys
 import re
 
+
 def get_latest_commit_message():
     """Get the latest commit message"""
     try:
-        result = subprocess.run(['git', 'log', '-1', '--pretty=format:%B'],
-                              capture_output=True, text=True, check=True)
+        result = subprocess.run(
+            ["git", "log", "-1", "--pretty=format:%B"],
+            capture_output=True,
+            text=True,
+            check=True,
+        )
         return result.stdout
     except subprocess.CalledProcessError:
         print("Error: Failed to get commit message")
@@ -24,30 +29,35 @@ def get_latest_commit_message():
         print("Error: git command not found")
         return None
 
+
 def check_commit_format(commit_msg):
     """Check commit message formatting"""
     issues = []
     if not commit_msg:
         return ["No commit message found"]
-    lines = commit_msg.strip().split('\n')
+    lines = commit_msg.strip().split("\n")
     # Find Generated-by line
     generated_by_idx = None
     signed_off_by_idx = None
     for i, line in enumerate(lines):
-        if line.startswith('Generated-by: Claude AI'):
+        if line.startswith("Generated-by: Claude AI"):
             generated_by_idx = i
-        elif line.startswith('Signed-off-by:'):
+        elif line.startswith("Signed-off-by:"):
             signed_off_by_idx = i
     # If Generated-by is present, check formatting
     if generated_by_idx is not None:
         if signed_off_by_idx is None:
-            issues.append("Generated-by: Claude AI found but no Signed-off-by tag present")
+            issues.append(
+                "Generated-by: Claude AI found but no Signed-off-by tag present"
+            )
         else:
             # Check if Generated-by is immediately followed by Signed-off-by (no lines in between)
             if signed_off_by_idx != generated_by_idx + 1:
                 lines_between = signed_off_by_idx - generated_by_idx - 1
                 if lines_between > 0:
-                    issues.append(f"Generated-by: Claude AI must be immediately followed by Signed-off-by (found {lines_between} lines between them)")
+                    issues.append(
+                        f"Generated-by: Claude AI must be immediately followed by Signed-off-by (found {lines_between} lines between them)"
+                    )
                     for i in range(generated_by_idx + 1, signed_off_by_idx):
                         if lines[i].strip():
                             issues.append(f"  - Non-empty line at {i+1}: '{lines[i]}'")
@@ -55,6 +65,7 @@ def check_commit_format(commit_msg):
                             issues.append(f"  - Empty line at {i+1}")
     return issues
 
+
 def main():
     """Main function to check commit message format"""
     commit_msg = get_latest_commit_message()
@@ -81,5 +92,6 @@ def main():
         print("✅ Commit message formatting is correct!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/coccinelle/generation/check_for_atomic_calls.py b/scripts/coccinelle/generation/check_for_atomic_calls.py
index 5849f964..9fb06ef2 100755
--- a/scripts/coccinelle/generation/check_for_atomic_calls.py
+++ b/scripts/coccinelle/generation/check_for_atomic_calls.py
@@ -28,27 +28,27 @@ parser = argparse.ArgumentParser(
     description="Generate a Coccinelle checker for atomic context in transitive callers of a target function."
 )
 parser.add_argument(
-    "--levels", "-l",
+    "--levels",
+    "-l",
     type=int,
     required=True,
-    help="Maximum number of transitive caller levels to follow (e.g., 5)"
+    help="Maximum number of transitive caller levels to follow (e.g., 5)",
 )
 parser.add_argument(
-    "--target", "-t",
+    "--target",
+    "-t",
     type=str,
     required=True,
-    help="Target function to trace (e.g., __find_get_block_slow)"
+    help="Target function to trace (e.g., __find_get_block_slow)",
 )
 parser.add_argument(
-    "--output", "-o",
-    type=str,
-    required=True,
-    help="Output .cocci file to generate"
+    "--output", "-o", type=str, required=True, help="Output .cocci file to generate"
 )
 args = parser.parse_args()
 max_depth = args.levels
 target_func = args.target
 
+
 # Add a function to get the number of processors for parallel jobs
 def get_nprocs():
     try:
@@ -56,6 +56,7 @@ def get_nprocs():
     except:
         return 1  # Default to 1 if can't determine
 
+
 outfile = args.output
 header = f"""// SPDX-License-Identifier: GPL-2.0
 /// Autogenerated by gen_atomic_context_chain.py
@@ -141,10 +142,11 @@ register_caller(fn, None)
 """
 with open(outfile, "w") as f:
     f.write(header)
-    
+
     # Generate all the caller chain rules
     for level in range(1, max_depth + 1):
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} caller discovery
 @caller{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -162,12 +164,14 @@ transitive_caller << virtual.transitive_caller;
 @@
 print(f"🔄 Chain level {level}: {{fn}} calls {{transitive_caller}} at {{p[0].file}}:{{p[0].line}}")
 register_caller(fn, p[0].file)
-""")
+"""
+        )
 
     # Check for atomic context in each caller in our chain
     for level in range(1, max_depth + 1):
         # First, check for common atomic primitives
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} atomic context check - Common atomic primitives
 @atomiccheck{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -246,10 +250,12 @@ key = (p1[0].file, p1[0].line, transitive_caller)
 if key not in seen_atomic:
     seen_atomic.add(key)
     print(f"⚠️  WARNING: atomic context at level {level}: {{p1[0].current_element}} at {{p1[0].file}}:{{p1[0].line}} may reach {{transitive_caller}}() → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for lock-related functions directly calling our chain
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} atomic context check - Lock-related functions
 @atomic_fn_check{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -275,10 +281,12 @@ transitive_caller << virtual.transitive_caller;
 atomic_keywords = ['lock', 'irq', 'atomic', 'bh', 'intr', 'preempt', 'disable', 'napi', 'rcu']
 if any(kw in lock_fn.lower() for kw in atomic_keywords):
     print(f"⚠️  WARNING: potential atomic function at level {level}: {{lock_fn}} (name suggests lock handling) contains call to {{transitive_caller}}() at {{p[0].file}}:{{p[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for spinlock regions
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} spinlock region check
 @spinlock_region{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -319,10 +327,12 @@ key = (p1[0].file, p1[0].line, p3[0].line, transitive_caller)
 if key not in seen_spinlock_regions:
     seen_spinlock_regions.add(key)
     print(f"⚠️  WARNING: spinlock region at level {level}: {{p1[0].current_element}} at {{p1[0].file}}:{{p1[0].line}} contains call to {{transitive_caller}}() at line {{p3[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Look for functions that can't sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Can't sleep contexts
 @cant_sleep{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -346,10 +356,12 @@ p2 << cant_sleep{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Non-sleeping context at {{p1[0].file}}:{{p1[0].line}} but calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for network driver contexts
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Network driver contexts (commonly atomic)
 @netdriver{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -389,10 +401,12 @@ p2 << netdriver{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Network driver context at {{p1[0].file}}:{{p1[0].line}} but calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for functions that might call from atomic context by name
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Function with name suggesting atomic context
 @atomic_name{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -411,10 +425,12 @@ p << atomic_name{level}.p;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Function with atomic-suggesting name {{atomic_fn}} calls {{transitive_caller}}() at {{p[0].file}}:{{p[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
         # Check for sleep-incompatible contexts but target function might sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Level {level} check - Target function called in context where might_sleep is used
 @might_sleep_check{level} depends on after_start exists@
 identifier virtual.transitive_caller;
@@ -436,8 +452,11 @@ p2 << might_sleep_check{level}.p2;
 transitive_caller << virtual.transitive_caller;
 @@
 print(f"⚠️  WARNING: Function has might_sleep() at {{p1[0].file}}:{{p1[0].line}} but also calls {{transitive_caller}}() at line {{p2[0].line}} → eventually {target_func}()")
-""")
+"""
+        )
 
     f.write("\n")
 
-print(f"✅ Generated {outfile} with enhanced atomic checks for `{target_func}` up to {max_depth} levels. Run with: make coccicheck MODE=report COCCI={outfile} J={get_nprocs()}")
+print(
+    f"✅ Generated {outfile} with enhanced atomic checks for `{target_func}` up to {max_depth} levels. Run with: make coccicheck MODE=report COCCI={outfile} J={get_nprocs()}"
+)
diff --git a/scripts/coccinelle/generation/check_for_sleepy_calls.py b/scripts/coccinelle/generation/check_for_sleepy_calls.py
index 87bd5b26..a32c9ee2 100755
--- a/scripts/coccinelle/generation/check_for_sleepy_calls.py
+++ b/scripts/coccinelle/generation/check_for_sleepy_calls.py
@@ -34,34 +34,35 @@ parser = argparse.ArgumentParser(
     description="Generate a Coccinelle checker to find sleeping functions called by a target function."
 )
 parser.add_argument(
-    "--function", "-f",
+    "--function",
+    "-f",
     type=str,
     required=True,
-    help="Target function to analyze (e.g., netif_rx_ni)"
+    help="Target function to analyze (e.g., netif_rx_ni)",
 )
 parser.add_argument(
-    "--max-depth", "-d",
+    "--max-depth",
+    "-d",
     type=int,
     default=3,
-    help="Maximum depth of function call chain to analyze (default: 3)"
+    help="Maximum depth of function call chain to analyze (default: 3)",
 )
 parser.add_argument(
-    "--output", "-o",
-    type=str,
-    required=True,
-    help="Output .cocci file to generate"
+    "--output", "-o", type=str, required=True, help="Output .cocci file to generate"
 )
 parser.add_argument(
-    "--sleepy-function", "-s",
+    "--sleepy-function",
+    "-s",
     type=str,
     default=None,
-    help="Specific function to check for that may cause sleeping (e.g., folio_wait_locked)"
+    help="Specific function to check for that may cause sleeping (e.g., folio_wait_locked)",
 )
 parser.add_argument(
-    "--expected", "-e",
+    "--expected",
+    "-e",
     action="store_true",
     default=False,
-    help="Indicate that the function is expected to have a sleep path (verified by manual inspection)"
+    help="Indicate that the function is expected to have a sleep path (verified by manual inspection)",
 )
 args = parser.parse_args()
 target_func = args.function
@@ -70,6 +71,7 @@ outfile = args.output
 sleepy_func = args.sleepy_function
 expected_to_sleep = args.expected
 
+
 # Add a function to get the number of processors for parallel jobs
 def get_nprocs():
     try:
@@ -77,17 +79,49 @@ def get_nprocs():
     except:
         return 1  # Default to 1 if can't determine
 
+
 # List of common functions known to sleep
 known_sleepy_functions = [
-    "msleep", "ssleep", "usleep_range", "schedule", "schedule_timeout",
-    "wait_event", "wait_for_completion", "mutex_lock", "down_read", "down_write",
-    "kthread_create", "kthread_run", "kmalloc", "__kmalloc", "kmem_cache_alloc", 
-    "vmalloc", "vzalloc", "kvmalloc", "kzalloc", "__vmalloc", "kvzalloc",
-    "sock_create", "sock_create_kern", "sock_create_lite", "sock_socket", 
-    "filp_open", "open_bdev_exclusive", "create_workqueue", 
-    "alloc_workqueue", "__alloc_workqueue_key", "request_threaded_irq",
-    "request_module", "try_module_get", "module_put", "printk", "GFP_KERNEL",
-    "copy_from_user", "copy_to_user", "__copy_from_user", "__copy_to_user"
+    "msleep",
+    "ssleep",
+    "usleep_range",
+    "schedule",
+    "schedule_timeout",
+    "wait_event",
+    "wait_for_completion",
+    "mutex_lock",
+    "down_read",
+    "down_write",
+    "kthread_create",
+    "kthread_run",
+    "kmalloc",
+    "__kmalloc",
+    "kmem_cache_alloc",
+    "vmalloc",
+    "vzalloc",
+    "kvmalloc",
+    "kzalloc",
+    "__vmalloc",
+    "kvzalloc",
+    "sock_create",
+    "sock_create_kern",
+    "sock_create_lite",
+    "sock_socket",
+    "filp_open",
+    "open_bdev_exclusive",
+    "create_workqueue",
+    "alloc_workqueue",
+    "__alloc_workqueue_key",
+    "request_threaded_irq",
+    "request_module",
+    "try_module_get",
+    "module_put",
+    "printk",
+    "GFP_KERNEL",
+    "copy_from_user",
+    "copy_to_user",
+    "__copy_from_user",
+    "__copy_to_user",
 ]
 
 # If a specific sleepy function is provided, only check for that one
@@ -97,9 +131,17 @@ if sleepy_func:
 
 # List of common GFP flags that indicate sleeping is allowed
 sleepy_gfp_flags = [
-    "GFP_KERNEL", "GFP_USER", "GFP_HIGHUSER", "GFP_DMA", 
-    "GFP_DMA32", "GFP_NOWAIT", "GFP_NOIO", "GFP_NOFS",
-    "__GFP_WAIT", "__GFP_IO", "__GFP_FS"
+    "GFP_KERNEL",
+    "GFP_USER",
+    "GFP_HIGHUSER",
+    "GFP_DMA",
+    "GFP_DMA32",
+    "GFP_NOWAIT",
+    "GFP_NOIO",
+    "GFP_NOFS",
+    "__GFP_WAIT",
+    "__GFP_IO",
+    "__GFP_FS",
 ]
 
 # Create a stats directory
@@ -111,8 +153,9 @@ with open(outfile, "w") as f:
     title = f"Detect if function '{target_func}' calls any functions that might sleep"
     if sleepy_func:
         title = f"Detect if function '{target_func}' calls '{sleepy_func}'"
-    
-    f.write(f"""// SPDX-License-Identifier: GPL-2.0
+
+    f.write(
+        f"""// SPDX-License-Identifier: GPL-2.0
 /// Autogenerated by check_for_sleepy_calls.py
 /// {title}
 // Options: --no-includes --include-headers
@@ -281,10 +324,12 @@ def save_stats():
     
     with open(stats_file, "w") as f:
         json.dump(stats, f, indent=2)
-""")
+"""
+    )
 
     # Define the rule to find direct calls to the target function
-    f.write(f"""
+    f.write(
+        f"""
 // Find direct function calls made by target function
 @find_calls@
 identifier fn;
@@ -305,11 +350,13 @@ total_calls_checked += 1
 register_call(target_func, fn, p[0].file, p[0].line)
 register_func_for_analysis(fn)
 save_stats()
-""")
+"""
+    )
 
     # Add direct checking for specific sleepy function if provided
     if sleepy_func:
-        f.write(f"""
+        f.write(
+            f"""
 // Direct check: Does target function call sleepy function directly?
 @direct_sleepy_call@
 position p;
@@ -327,12 +374,16 @@ global total_sleep_routines_checked
 total_sleep_routines_checked += 1
 register_sleep_point(target_func, sleepy_func, p[0].file, p[0].line, "directly calls target sleep function")
 save_stats()
-""")
-    
+"""
+        )
+
     # Generate rules for checking nested function calls
-    for depth in range(2, max_depth + 1):  # Start from 2 as level 1 is the direct call we already checked
+    for depth in range(
+        2, max_depth + 1
+    ):  # Start from 2 as level 1 is the direct call we already checked
         # Find functions called by functions at the previous level
-        f.write(f"""
+        f.write(
+            f"""
 // Level {depth} - Find functions called by level {depth-1} functions
 @find_calls_l{depth}@
 identifier fn1;
@@ -357,11 +408,13 @@ if fn1 in seen_funcs and fn1 != fn2:  # Avoid self-recursion
     register_call(fn1, fn2, p[0].file, p[0].line)
     register_func_for_analysis(fn2)
     save_stats()
-""")
+"""
+        )
 
         if sleepy_func:
             # If looking for a specific sleepy function, check at this level
-            f.write(f"""
+            f.write(
+                f"""
 // Level {depth} - Find calls to sleepy function
 @sleepy_call_l{depth}@
 identifier fn;
@@ -383,10 +436,12 @@ total_sleep_routines_checked += 1
 if fn in seen_funcs:
     register_sleep_point(fn, sleepy_func, p[0].file, p[0].line, f"level {depth} call to target sleep function")
     save_stats()
-""")
+"""
+            )
         else:
             # If doing general sleep checking, check for known sleepy functions at this level
-            f.write(f"""
+            f.write(
+                f"""
 // Level {depth} - Check for known sleepy functions
 @known_sleepers_l{depth}@
 identifier fn;
@@ -394,16 +449,22 @@ position p;
 @@
 fn(...) {{
   <...
-  (""")
+  ("""
+            )
             # Add all known sleepy functions to the pattern
             for i, sleepy_func_name in enumerate(known_sleepy_functions):
                 if i > 0:
-                    f.write(f"""
-  |""")
-                f.write(f"""
-  {sleepy_func_name}@p(...)""")
-            
-            f.write(f"""
+                    f.write(
+                        f"""
+  |"""
+                    )
+                f.write(
+                    f"""
+  {sleepy_func_name}@p(...)"""
+                )
+
+            f.write(
+                f"""
   )
   ...>
 }}
@@ -419,12 +480,14 @@ if fn in seen_funcs:
     sleep_func = p[0].current_element
     register_sleep_point(fn, sleep_func, p[0].file, p[0].line, f"level {depth} call to known sleeping function")
     save_stats()
-""")
+"""
+            )
 
     # Only add the other sleep detection rules if we're not constraining to a specific function
     if not sleepy_func:
         # Check for GFP_KERNEL and other sleepy allocation flags
-        f.write(f"""
+        f.write(
+            f"""
 // Check for sleepy memory allocation flags
 @check_sleepy_alloc@
 position p;
@@ -432,15 +495,21 @@ identifier fn;
 @@
 fn(...) {{
   <...
-  (""")
+  ("""
+        )
         # Add patterns for all sleepy GFP flags
         for i, flag in enumerate(sleepy_gfp_flags):
             if i > 0:
-                f.write(f"""
-  |""")
-            f.write(f"""
-  {flag}@p""")
-        f.write(f"""
+                f.write(
+                    f"""
+  |"""
+                )
+            f.write(
+                f"""
+  {flag}@p"""
+            )
+        f.write(
+            f"""
   )
   ...>
 }}
@@ -456,10 +525,12 @@ if fn in seen_funcs:
     flag = p[0].current_element
     register_sleep_point(fn, flag, p[0].file, p[0].line, "uses allocation flag that may sleep")
     save_stats()
-""")
+"""
+        )
 
         # Check for mutex locks
-        f.write(f"""
+        f.write(
+            f"""
 // Check for mutex locks
 @check_mutex@
 position p;
@@ -510,10 +581,12 @@ if fn in seen_funcs:
     lock_func = p[0].current_element
     register_sleep_point(fn, lock_func, p[0].file, p[0].line, "uses mutex or completion that may sleep")
     save_stats()
-""")
+"""
+        )
 
         # Check for might_sleep calls
-        f.write(f"""
+        f.write(
+            f"""
 // Check for explicit might_sleep calls
 @check_might_sleep@
 position p;
@@ -542,10 +615,12 @@ if fn in seen_funcs:
     sleep_func = p[0].current_element
     register_sleep_point(fn, sleep_func, p[0].file, p[0].line, "contains explicit might_sleep() call")
     save_stats()
-""")
+"""
+        )
 
         # Check for functions with names suggesting they might sleep
-        f.write(f"""
+        f.write(
+            f"""
 // Check for functions with sleep-suggesting names
 @check_sleep_names@
 position p;
@@ -574,10 +649,12 @@ if fn in seen_funcs:
             sleep_fn.startswith("local_")):
         register_sleep_point(fn, sleep_fn, p[0].file, p[0].line, "calls function with name suggesting it might sleep")
         save_stats()
-""")
+"""
+        )
 
     # Add a finalization rule that summarizes the findings
-    f.write(f"""
+    f.write(
+        f"""
 @finalize:python@
 @@
 # Save any final stats before finishing
@@ -669,7 +746,8 @@ try:
     shutil.rmtree(stats_dir)
 except Exception as e:
     print(f"Note: Could not clean up stats directory: {{e}}")
-""")
+"""
+    )
 
 msg = f"✅ Generated {outfile} to check if '{target_func}' might sleep"
 if sleepy_func:
diff --git a/scripts/detect_whitespace_issues.py b/scripts/detect_whitespace_issues.py
index 165a33e2..de5fef70 100755
--- a/scripts/detect_whitespace_issues.py
+++ b/scripts/detect_whitespace_issues.py
@@ -12,37 +12,40 @@ import os
 import sys
 from pathlib import Path
 
+
 def check_file_whitespace(file_path):
     """Check a single file for whitespace issues"""
     issues = []
 
     try:
-        with open(file_path, 'rb') as f:
+        with open(file_path, "rb") as f:
             content = f.read()
 
         # Skip binary files
-        if b'\0' in content:
+        if b"\0" in content:
             return issues
 
-        lines = content.decode('utf-8', errors='ignore').splitlines(keepends=True)
+        lines = content.decode("utf-8", errors="ignore").splitlines(keepends=True)
 
         # Check trailing whitespace
         for line_num, line in enumerate(lines, 1):
-            if line.rstrip('\n\r').endswith(' ') or line.rstrip('\n\r').endswith('\t'):
+            if line.rstrip("\n\r").endswith(" ") or line.rstrip("\n\r").endswith("\t"):
                 issues.append(f"Line {line_num}: Trailing whitespace")
 
         # Check missing newline at end of file
-        if content and not content.endswith(b'\n'):
+        if content and not content.endswith(b"\n"):
             issues.append("Missing newline at end of file")
 
         # Check for excessive blank lines (more than 2 consecutive)
         blank_count = 0
         for line_num, line in enumerate(lines, 1):
-            if line.strip() == '':
+            if line.strip() == "":
                 blank_count += 1
             else:
                 if blank_count > 2:
-                    issues.append(f"Line {line_num - blank_count}: {blank_count} consecutive blank lines")
+                    issues.append(
+                        f"Line {line_num - blank_count}: {blank_count} consecutive blank lines"
+                    )
                 blank_count = 0
 
     except Exception as e:
@@ -50,6 +53,7 @@ def check_file_whitespace(file_path):
 
     return issues
 
+
 def main():
     """Main function to scan for whitespace issues"""
     if len(sys.argv) > 1:
@@ -57,10 +61,15 @@ def main():
     else:
         # Default to git tracked files with modifications
         import subprocess
+
         try:
-            result = subprocess.run(['git', 'diff', '--name-only'],
-                                  capture_output=True, text=True, check=True)
-            paths = result.stdout.strip().split('\n') if result.stdout.strip() else []
+            result = subprocess.run(
+                ["git", "diff", "--name-only"],
+                capture_output=True,
+                text=True,
+                check=True,
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
             if not paths:
                 print("No modified files found in git")
                 return
@@ -82,7 +91,7 @@ def main():
 
         if path.is_file():
             # Skip certain file types
-            if path.suffix in ['.pyc', '.so', '.o', '.bin', '.jpg', '.png', '.gif']:
+            if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif"]:
                 continue
 
             issues = check_file_whitespace(path)
@@ -93,7 +102,9 @@ def main():
                 for issue in issues:
                     print(f"  ⚠️  {issue}")
 
-    print(f"\nSummary: {total_issues} whitespace issues found in {files_with_issues} files")
+    print(
+        f"\nSummary: {total_issues} whitespace issues found in {files_with_issues} files"
+    )
 
     if total_issues > 0:
         print("\nTo fix these issues:")
@@ -105,5 +116,6 @@ def main():
         print("✅ No whitespace issues found!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/fix_whitespace_issues.py b/scripts/fix_whitespace_issues.py
index 3e69ea50..585ff8b9 100755
--- a/scripts/fix_whitespace_issues.py
+++ b/scripts/fix_whitespace_issues.py
@@ -12,19 +12,20 @@ import os
 import sys
 from pathlib import Path
 
+
 def fix_file_whitespace(file_path):
     """Fix whitespace issues in a single file"""
     issues_fixed = []
 
     try:
-        with open(file_path, 'rb') as f:
+        with open(file_path, "rb") as f:
             content = f.read()
 
         # Skip binary files
-        if b'\0' in content:
+        if b"\0" in content:
             return issues_fixed
 
-        original_content = content.decode('utf-8', errors='ignore')
+        original_content = content.decode("utf-8", errors="ignore")
         lines = original_content.splitlines(keepends=True)
         modified = False
 
@@ -33,12 +34,12 @@ def fix_file_whitespace(file_path):
         for line_num, line in enumerate(lines, 1):
             original_line = line
             # Remove trailing whitespace but preserve line endings
-            if line.endswith('\r\n'):
-                cleaned_line = line.rstrip(' \t\r\n') + '\r\n'
-            elif line.endswith('\n'):
-                cleaned_line = line.rstrip(' \t\n') + '\n'
+            if line.endswith("\r\n"):
+                cleaned_line = line.rstrip(" \t\r\n") + "\r\n"
+            elif line.endswith("\n"):
+                cleaned_line = line.rstrip(" \t\n") + "\n"
             else:
-                cleaned_line = line.rstrip(' \t')
+                cleaned_line = line.rstrip(" \t")
 
             if original_line != cleaned_line:
                 issues_fixed.append(f"Line {line_num}: Removed trailing whitespace")
@@ -52,7 +53,7 @@ def fix_file_whitespace(file_path):
         i = 0
         while i < len(new_lines):
             line = new_lines[i]
-            if line.strip() == '':
+            if line.strip() == "":
                 blank_count += 1
                 if blank_count <= 2:
                     final_lines.append(line)
@@ -65,15 +66,15 @@ def fix_file_whitespace(file_path):
             i += 1
 
         # Fix missing newline at end of file
-        new_content = ''.join(final_lines)
-        if new_content and not new_content.endswith('\n'):
-            new_content += '\n'
+        new_content = "".join(final_lines)
+        if new_content and not new_content.endswith("\n"):
+            new_content += "\n"
             issues_fixed.append("Added missing newline at end of file")
             modified = True
 
         # Write back if modified
         if modified:
-            with open(file_path, 'w', encoding='utf-8') as f:
+            with open(file_path, "w", encoding="utf-8") as f:
                 f.write(new_content)
 
     except Exception as e:
@@ -81,6 +82,7 @@ def fix_file_whitespace(file_path):
 
     return issues_fixed
 
+
 def main():
     """Main function to fix whitespace issues"""
     if len(sys.argv) > 1:
@@ -88,10 +90,15 @@ def main():
     else:
         # Default to git tracked files with modifications
         import subprocess
+
         try:
-            result = subprocess.run(['git', 'diff', '--name-only'],
-                                  capture_output=True, text=True, check=True)
-            paths = result.stdout.strip().split('\n') if result.stdout.strip() else []
+            result = subprocess.run(
+                ["git", "diff", "--name-only"],
+                capture_output=True,
+                text=True,
+                check=True,
+            )
+            paths = result.stdout.strip().split("\n") if result.stdout.strip() else []
             if not paths:
                 print("No modified files found in git")
                 return 0
@@ -113,7 +120,7 @@ def main():
 
         if path.is_file():
             # Skip certain file types
-            if path.suffix in ['.pyc', '.so', '.o', '.bin', '.jpg', '.png', '.gif']:
+            if path.suffix in [".pyc", ".so", ".o", ".bin", ".jpg", ".png", ".gif"]:
                 continue
 
             fixes = fix_file_whitespace(path)
@@ -133,5 +140,6 @@ def main():
         print("✅ No whitespace issues found to fix!")
         return 0
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     sys.exit(main())
diff --git a/scripts/generate_refs.py b/scripts/generate_refs.py
index 4bd179b9..45b74348 100755
--- a/scripts/generate_refs.py
+++ b/scripts/generate_refs.py
@@ -120,7 +120,7 @@ def _ref_generator_choices_static(args, conf_name, ref_name, ref_help):
         # Add "_USER_REF" suffix to avoid static duplicates when both user
         # and default Kconfig files exists. Fixes 'warning: choice value used
         # outside its choice group'
-        if 'refs' in args and args.refs != 0:
+        if "refs" in args and args.refs != 0:
             conf_name = conf_name + "_USER_REF"
         refs.update({ref_name: conf_name})
         f.write("config {}\n".format(conf_name))
@@ -328,7 +328,9 @@ def kreleases(args) -> None:
             for release in data["releases"]:
                 if release["moniker"] == args.moniker:
                     # Check if release.json is aa.bb.cc type
-                    if re.compile(r'^\d+\.\d+(\.\d+|-rc\d+)?$').match(release["version"]):
+                    if re.compile(r"^\d+\.\d+(\.\d+|-rc\d+)?$").match(
+                        release["version"]
+                    ):
                         reflist.append("v" + release["version"])
                     else:
                         reflist.append(release["version"])
diff --git a/scripts/honey-badger.py b/scripts/honey-badger.py
index 29c3966b..43dd45c2 100755
--- a/scripts/honey-badger.py
+++ b/scripts/honey-badger.py
@@ -21,16 +21,18 @@ KERNEL_PPA_URL = "https://kernel.ubuntu.com/mainline/"
 ARCH = "amd64"
 KERNEL_DIR = "/tmp/kernels"
 
+
 def is_dpkg_installed():
-    return shutil.which('dpkg') is not None
+    return shutil.which("dpkg") is not None
+
 
 def extract_deb(deb_file, tempdir, verbose=False, dest="/"):
     try:
         if verbose:
             print(f"Extracting {deb_file} onto {tempdir}")
         # Extract the ar archive
-        subprocess.run(['ar', 'x', deb_file], check=True, cwd=tempdir)
-        data_tarball = next(f for f in os.listdir(tempdir) if f.startswith('data.tar'))
+        subprocess.run(["ar", "x", deb_file], check=True, cwd=tempdir)
+        data_tarball = next(f for f in os.listdir(tempdir) if f.startswith("data.tar"))
         # Extract the data tarball to the correct locations
         with tarfile.open(os.path.join(tempdir, data_tarball)) as tar:
             tar.extractall(path=dest)
@@ -43,7 +45,8 @@ def extract_deb(deb_file, tempdir, verbose=False, dest="/"):
         if os.path.exists(tempdir):
             if verbose:
                 print(f"Removing temporary directory {tempdir}")
-            subprocess.run(['rm', '-rf', tempdir])
+            subprocess.run(["rm", "-rf", tempdir])
+
 
 def install_kernel_packages(package_files, verbose=False, use_ar=False, dest="/"):
     if use_ar:
@@ -56,49 +59,66 @@ def install_kernel_packages(package_files, verbose=False, use_ar=False, dest="/"
         for package in package_files:
             if verbose:
                 print("Running: dpkg %s" % package)
-            subprocess.run(['sudo', 'dpkg', '-i', package], check=True)
+            subprocess.run(["sudo", "dpkg", "-i", package], check=True)
+
 
 def parse_version(version):
-    match = re.match(r'v(\d+)\.(\d+)(?:\.(\d+))?(?:-(rc\d+))?', version)
+    match = re.match(r"v(\d+)\.(\d+)(?:\.(\d+))?(?:-(rc\d+))?", version)
     if match:
         major, minor, patch, rc = match.groups()
-        return (int(major), int(minor), int(patch) if patch else 0, rc if rc else '')
-    return (0, 0, 0, '')
+        return (int(major), int(minor), int(patch) if patch else 0, rc if rc else "")
+    return (0, 0, 0, "")
+
 
 def get_kernel_versions():
     response = requests.get(KERNEL_PPA_URL)
     response.raise_for_status()
-    soup = BeautifulSoup(response.text, 'html.parser')
-    versions = [href.strip('/') for link in soup.find_all('a') if (href := link.get('href')).startswith('v') and href.endswith('/')]
+    soup = BeautifulSoup(response.text, "html.parser")
+    versions = [
+        href.strip("/")
+        for link in soup.find_all("a")
+        if (href := link.get("href")).startswith("v") and href.endswith("/")
+    ]
 
     versions = sorted(versions, key=parse_version, reverse=True)
     return group_versions(versions)
 
+
 def group_versions(versions):
     grouped = []
-    for key, group in groupby(versions, lambda x: (parse_version(x)[0], parse_version(x)[1])):
+    for key, group in groupby(
+        versions, lambda x: (parse_version(x)[0], parse_version(x)[1])
+    ):
         group = sorted(group, key=parse_version, reverse=True)
         grouped.append(list(group))
     return grouped
 
+
 def verify_kernel_files(version):
     url = f"{KERNEL_PPA_URL}{version}/{ARCH}/"
     response = requests.get(url)
     if response.status_code != 200:
         return False
-    soup = BeautifulSoup(response.text, 'html.parser')
-    files = [a['href'] for a in soup.find_all('a') if a['href'].endswith('.deb')]
-    if any('linux-image-unsigned' in f for f in files) and any('linux-modules' in f for f in files):
+    soup = BeautifulSoup(response.text, "html.parser")
+    files = [a["href"] for a in soup.find_all("a") if a["href"].endswith(".deb")]
+    if any("linux-image-unsigned" in f for f in files) and any(
+        "linux-modules" in f for f in files
+    ):
         return True
     return False
 
+
 def download_and_install(file_type, version, verbose=False, use_ar=False, dest="/"):
     url = f"{KERNEL_PPA_URL}{version}/{ARCH}/"
     response = requests.get(url)
     response.raise_for_status()  # Ensure we raise an exception for failed requests
-    soup = BeautifulSoup(response.text, 'html.parser')
+    soup = BeautifulSoup(response.text, "html.parser")
 
-    deb_files = [a['href'] for a in soup.find_all('a') if a['href'].endswith('.deb') and file_type in a['href']]
+    deb_files = [
+        a["href"]
+        for a in soup.find_all("a")
+        if a["href"].endswith(".deb") and file_type in a["href"]
+    ]
     local_deb_files = []
 
     if not os.path.exists(KERNEL_DIR):
@@ -111,7 +131,7 @@ def download_and_install(file_type, version, verbose=False, use_ar=False, dest="
             print(f"Attempting to download from {full_url}...")
         r = requests.get(full_url, stream=True)
         if r.status_code == 200:
-            with open(local_file, 'wb') as f:
+            with open(local_file, "wb") as f:
                 f.write(r.content)
             if verbose:
                 print(f"Downloaded {local_file}.")
@@ -122,16 +142,38 @@ def download_and_install(file_type, version, verbose=False, use_ar=False, dest="
 
     install_kernel_packages(local_deb_files, verbose, use_ar, dest)
 
+
 def main():
     dpkg_installed = is_dpkg_installed()
     parser = argparse.ArgumentParser(description="Linux stable kernel honey badger")
-    parser.add_argument('--list', action='store_true', help='List available kernels')
-    parser.add_argument('--install', action='store_true', help='Install specified number of latest kernels')
-    parser.add_argument('--dest', type=str, help='Install the packages into the specified directory')
-    parser.add_argument('--verbose', action='store_true', help='Enable verbose output')
-    parser.add_argument('--use-ar', action='store_true', default=dpkg_installed, help='Do not use dpkg even if present')
-    parser.add_argument('--use-file', type=str, help='Skip download and just install this debian package file')
-    parser.add_argument('-c', '--count', type=int, default=1, help='Number of kernels to list or install')
+    parser.add_argument("--list", action="store_true", help="List available kernels")
+    parser.add_argument(
+        "--install",
+        action="store_true",
+        help="Install specified number of latest kernels",
+    )
+    parser.add_argument(
+        "--dest", type=str, help="Install the packages into the specified directory"
+    )
+    parser.add_argument("--verbose", action="store_true", help="Enable verbose output")
+    parser.add_argument(
+        "--use-ar",
+        action="store_true",
+        default=dpkg_installed,
+        help="Do not use dpkg even if present",
+    )
+    parser.add_argument(
+        "--use-file",
+        type=str,
+        help="Skip download and just install this debian package file",
+    )
+    parser.add_argument(
+        "-c",
+        "--count",
+        type=int,
+        default=1,
+        help="Number of kernels to list or install",
+    )
     args = parser.parse_args()
 
     kernel_versions_grouped = get_kernel_versions()
@@ -145,7 +187,9 @@ def main():
             break
         version = group[0]  # Pick the latest version in the group
         if args.verbose:
-            print(f"Verifying files for {version} at {KERNEL_PPA_URL}{version}/{ARCH}/...")
+            print(
+                f"Verifying files for {version} at {KERNEL_PPA_URL}{version}/{ARCH}/..."
+            )
         if verify_kernel_files(version):
             valid_versions.append(version)
         else:
@@ -157,7 +201,7 @@ def main():
         return
 
     if args.use_file and args.install:
-        pkgs = [ args.use_file ]
+        pkgs = [args.use_file]
         install_kernel_packages(pkgs, args.verbose, args.use_ar, args.dest)
         return
 
@@ -165,9 +209,12 @@ def main():
         for version in valid_versions:
             if args.verbose:
                 print(f"Installing kernel version {version}...")
-            files = ['linux-modules', 'linux-image-unsigned', 'linux-headers']
+            files = ["linux-modules", "linux-image-unsigned", "linux-headers"]
             for file_type in files:
-                download_and_install(file_type, version, args.verbose, args.use_ar, args.dest)
+                download_and_install(
+                    file_type, version, args.verbose, args.use_ar, args.dest
+                )
+
 
 if __name__ == "__main__":
     main()
diff --git a/scripts/spdxcheck.py b/scripts/spdxcheck.py
index d6fb62e2..3a1f3a95 100755
--- a/scripts/spdxcheck.py
+++ b/scripts/spdxcheck.py
@@ -11,31 +11,35 @@ import git
 import re
 import os
 
+
 class ParserException(Exception):
     def __init__(self, tok, txt):
         self.tok = tok
         self.txt = txt
 
+
 class SPDXException(Exception):
     def __init__(self, el, txt):
         self.el = el
         self.txt = txt
 
+
 class SPDXdata(object):
     def __init__(self):
         self.license_files = 0
         self.exception_files = 0
-        self.licenses = [ ]
-        self.exceptions = { }
+        self.licenses = []
+        self.exceptions = {}
+
 
 # Read the spdx data from the LICENSES directory
 def read_spdxdata(repo):
 
     # The subdirectories of LICENSES in the kernel source
     # Note: exceptions needs to be parsed as last directory.
-    #license_dirs = [ "preferred", "dual", "deprecated", "exceptions" ]
-    license_dirs = [ "preferred" ]
-    lictree = repo.head.commit.tree['LICENSES']
+    # license_dirs = [ "preferred", "dual", "deprecated", "exceptions" ]
+    license_dirs = ["preferred"]
+    lictree = repo.head.commit.tree["LICENSES"]
 
     spdx = SPDXdata()
 
@@ -46,50 +50,65 @@ def read_spdxdata(repo):
 
             exception = None
             for l in open(el.path).readlines():
-                if l.startswith('Valid-License-Identifier:'):
-                    lid = l.split(':')[1].strip().upper()
+                if l.startswith("Valid-License-Identifier:"):
+                    lid = l.split(":")[1].strip().upper()
                     if lid in spdx.licenses:
-                        raise SPDXException(el, 'Duplicate License Identifier: %s' %lid)
+                        raise SPDXException(
+                            el, "Duplicate License Identifier: %s" % lid
+                        )
                     else:
                         spdx.licenses.append(lid)
 
-                elif l.startswith('SPDX-Exception-Identifier:'):
-                    exception = l.split(':')[1].strip().upper()
+                elif l.startswith("SPDX-Exception-Identifier:"):
+                    exception = l.split(":")[1].strip().upper()
                     spdx.exceptions[exception] = []
 
-                elif l.startswith('SPDX-Licenses:'):
-                    for lic in l.split(':')[1].upper().strip().replace(' ', '').replace('\t', '').split(','):
+                elif l.startswith("SPDX-Licenses:"):
+                    for lic in (
+                        l.split(":")[1]
+                        .upper()
+                        .strip()
+                        .replace(" ", "")
+                        .replace("\t", "")
+                        .split(",")
+                    ):
                         if not lic in spdx.licenses:
-                            raise SPDXException(None, 'Exception %s missing license %s' %(exception, lic))
+                            raise SPDXException(
+                                None,
+                                "Exception %s missing license %s" % (exception, lic),
+                            )
                         spdx.exceptions[exception].append(lic)
 
                 elif l.startswith("License-Text:"):
                     if exception:
                         if not len(spdx.exceptions[exception]):
-                            raise SPDXException(el, 'Exception %s is missing SPDX-Licenses' %exception)
+                            raise SPDXException(
+                                el, "Exception %s is missing SPDX-Licenses" % exception
+                            )
                         spdx.exception_files += 1
                     else:
                         spdx.license_files += 1
                     break
     return spdx
 
+
 class id_parser(object):
 
-    reserved = [ 'AND', 'OR', 'WITH' ]
-    tokens = [ 'LPAR', 'RPAR', 'ID', 'EXC' ] + reserved
+    reserved = ["AND", "OR", "WITH"]
+    tokens = ["LPAR", "RPAR", "ID", "EXC"] + reserved
 
-    precedence = ( ('nonassoc', 'AND', 'OR'), )
+    precedence = (("nonassoc", "AND", "OR"),)
 
-    t_ignore = ' \t'
+    t_ignore = " \t"
 
     def __init__(self, spdx):
         self.spdx = spdx
         self.lasttok = None
         self.lastid = None
-        self.lexer = lex.lex(module = self, reflags = re.UNICODE)
+        self.lexer = lex.lex(module=self, reflags=re.UNICODE)
         # Initialize the parser. No debug file and no parser rules stored on disk
         # The rules are small enough to be generated on the fly
-        self.parser = yacc.yacc(module = self, write_tables = False, debug = False)
+        self.parser = yacc.yacc(module=self, write_tables=False, debug=False)
         self.lines_checked = 0
         self.checked = 0
         self.spdx_valid = 0
@@ -100,93 +119,95 @@ class id_parser(object):
     # Validate License and Exception IDs
     def validate(self, tok):
         id = tok.value.upper()
-        if tok.type == 'ID':
+        if tok.type == "ID":
             if not id in self.spdx.licenses:
-                raise ParserException(tok, 'Invalid License ID')
+                raise ParserException(tok, "Invalid License ID")
             self.lastid = id
-        elif tok.type == 'EXC':
+        elif tok.type == "EXC":
             if id not in self.spdx.exceptions:
-                raise ParserException(tok, 'Invalid Exception ID')
+                raise ParserException(tok, "Invalid Exception ID")
             if self.lastid not in self.spdx.exceptions[id]:
-                raise ParserException(tok, 'Exception not valid for license %s' %self.lastid)
+                raise ParserException(
+                    tok, "Exception not valid for license %s" % self.lastid
+                )
             self.lastid = None
-        elif tok.type != 'WITH':
+        elif tok.type != "WITH":
             self.lastid = None
 
     # Lexer functions
     def t_RPAR(self, tok):
-        r'\)'
+        r"\)"
         self.lasttok = tok.type
         return tok
 
     def t_LPAR(self, tok):
-        r'\('
+        r"\("
         self.lasttok = tok.type
         return tok
 
     def t_ID(self, tok):
-        r'[A-Za-z.0-9\-+]+'
+        r"[A-Za-z.0-9\-+]+"
 
-        if self.lasttok == 'EXC':
+        if self.lasttok == "EXC":
             print(tok)
-            raise ParserException(tok, 'Missing parentheses')
+            raise ParserException(tok, "Missing parentheses")
 
         tok.value = tok.value.strip()
         val = tok.value.upper()
 
         if val in self.reserved:
             tok.type = val
-        elif self.lasttok == 'WITH':
-            tok.type = 'EXC'
+        elif self.lasttok == "WITH":
+            tok.type = "EXC"
 
         self.lasttok = tok.type
         self.validate(tok)
         return tok
 
     def t_error(self, tok):
-        raise ParserException(tok, 'Invalid token')
+        raise ParserException(tok, "Invalid token")
 
     def p_expr(self, p):
-        '''expr : ID
-                | ID WITH EXC
-                | expr AND expr
-                | expr OR expr
-                | LPAR expr RPAR'''
+        """expr : ID
+        | ID WITH EXC
+        | expr AND expr
+        | expr OR expr
+        | LPAR expr RPAR"""
         pass
 
     def p_error(self, p):
         if not p:
-            raise ParserException(None, 'Unfinished license expression')
+            raise ParserException(None, "Unfinished license expression")
         else:
-            raise ParserException(p, 'Syntax error')
+            raise ParserException(p, "Syntax error")
 
     def parse(self, expr):
         self.lasttok = None
         self.lastid = None
-        self.parser.parse(expr, lexer = self.lexer)
+        self.parser.parse(expr, lexer=self.lexer)
 
     def parse_lines(self, fd, maxlines, fname):
         self.checked += 1
         self.curline = 0
         try:
             for line in fd:
-                line = line.decode(locale.getpreferredencoding(False), errors='ignore')
+                line = line.decode(locale.getpreferredencoding(False), errors="ignore")
                 self.curline += 1
                 if self.curline > maxlines:
                     break
                 self.lines_checked += 1
                 if line.find("SPDX-License-Identifier:") < 0:
                     continue
-                expr = line.split(':')[1].strip()
+                expr = line.split(":")[1].strip()
                 # Remove trailing comment closure
-                if line.strip().endswith('*/'):
-                    expr = expr.rstrip('*/').strip()
+                if line.strip().endswith("*/"):
+                    expr = expr.rstrip("*/").strip()
                 # Remove trailing xml comment closure
-                if line.strip().endswith('-->'):
-                    expr = expr.rstrip('-->').strip()
+                if line.strip().endswith("-->"):
+                    expr = expr.rstrip("-->").strip()
                 # Special case for SH magic boot code files
-                if line.startswith('LIST \"'):
-                    expr = expr.rstrip('\"').strip()
+                if line.startswith('LIST "'):
+                    expr = expr.rstrip('"').strip()
                 self.parse(expr)
                 self.spdx_valid += 1
                 #
@@ -199,11 +220,14 @@ class id_parser(object):
             if pe.tok:
                 col = line.find(expr) + pe.tok.lexpos
                 tok = pe.tok.value
-                sys.stdout.write('%s: %d:%d %s: %s\n' %(fname, self.curline, col, pe.txt, tok))
+                sys.stdout.write(
+                    "%s: %d:%d %s: %s\n" % (fname, self.curline, col, pe.txt, tok)
+                )
             else:
-                sys.stdout.write('%s: %d:0 %s\n' %(fname, self.curline, col, pe.txt))
+                sys.stdout.write("%s: %d:0 %s\n" % (fname, self.curline, col, pe.txt))
             self.spdx_errors += 1
 
+
 def scan_git_tree(tree):
     for el in tree.traverse():
         # Exclude stuff which would make pointless noise
@@ -214,25 +238,38 @@ def scan_git_tree(tree):
             continue
         if not os.path.isfile(el.path):
             continue
-        with open(el.path, 'rb') as fd:
+        with open(el.path, "rb") as fd:
             parser.parse_lines(fd, args.maxlines, el.path)
 
+
 def scan_git_subtree(tree, path):
-    for p in path.strip('/').split('/'):
+    for p in path.strip("/").split("/"):
         tree = tree[p]
     scan_git_tree(tree)
 
-if __name__ == '__main__':
 
-    ap = ArgumentParser(description='SPDX expression checker')
-    ap.add_argument('path', nargs='*', help='Check path or file. If not given full git tree scan. For stdin use "-"')
-    ap.add_argument('-m', '--maxlines', type=int, default=15,
-                    help='Maximum number of lines to scan in a file. Default 15')
-    ap.add_argument('-v', '--verbose', action='store_true', help='Verbose statistics output')
+if __name__ == "__main__":
+
+    ap = ArgumentParser(description="SPDX expression checker")
+    ap.add_argument(
+        "path",
+        nargs="*",
+        help='Check path or file. If not given full git tree scan. For stdin use "-"',
+    )
+    ap.add_argument(
+        "-m",
+        "--maxlines",
+        type=int,
+        default=15,
+        help="Maximum number of lines to scan in a file. Default 15",
+    )
+    ap.add_argument(
+        "-v", "--verbose", action="store_true", help="Verbose statistics output"
+    )
     args = ap.parse_args()
 
     # Sanity check path arguments
-    if '-' in args.path and len(args.path) > 1:
+    if "-" in args.path and len(args.path) > 1:
         sys.stderr.write('stdin input "-" must be the only path argument\n')
         sys.exit(1)
 
@@ -249,49 +286,49 @@ if __name__ == '__main__':
 
     except SPDXException as se:
         if se.el:
-            sys.stderr.write('%s: %s\n' %(se.el.path, se.txt))
+            sys.stderr.write("%s: %s\n" % (se.el.path, se.txt))
         else:
-            sys.stderr.write('%s\n' %se.txt)
+            sys.stderr.write("%s\n" % se.txt)
         sys.exit(1)
 
     except Exception as ex:
-        sys.stderr.write('FAIL: %s\n' %ex)
-        sys.stderr.write('%s\n' %traceback.format_exc())
+        sys.stderr.write("FAIL: %s\n" % ex)
+        sys.stderr.write("%s\n" % traceback.format_exc())
         sys.exit(1)
 
     try:
-        if len(args.path) and args.path[0] == '-':
-            stdin = os.fdopen(sys.stdin.fileno(), 'rb')
-            parser.parse_lines(stdin, args.maxlines, '-')
+        if len(args.path) and args.path[0] == "-":
+            stdin = os.fdopen(sys.stdin.fileno(), "rb")
+            parser.parse_lines(stdin, args.maxlines, "-")
         else:
             if args.path:
                 for p in args.path:
                     if os.path.isfile(p):
-                        parser.parse_lines(open(p, 'rb'), args.maxlines, p)
+                        parser.parse_lines(open(p, "rb"), args.maxlines, p)
                     elif os.path.isdir(p):
                         scan_git_subtree(repo.head.reference.commit.tree, p)
                     else:
-                        sys.stderr.write('path %s does not exist\n' %p)
+                        sys.stderr.write("path %s does not exist\n" % p)
                         sys.exit(1)
             else:
                 # Full git tree scan
                 scan_git_tree(repo.head.commit.tree)
 
             if args.verbose:
-                sys.stderr.write('\n')
-                sys.stderr.write('License files:     %12d\n' %spdx.license_files)
-                sys.stderr.write('Exception files:   %12d\n' %spdx.exception_files)
-                sys.stderr.write('License IDs        %12d\n' %len(spdx.licenses))
-                sys.stderr.write('Exception IDs      %12d\n' %len(spdx.exceptions))
-                sys.stderr.write('\n')
-                sys.stderr.write('Files checked:     %12d\n' %parser.checked)
-                sys.stderr.write('Lines checked:     %12d\n' %parser.lines_checked)
-                sys.stderr.write('Files with SPDX:   %12d\n' %parser.spdx_valid)
-                sys.stderr.write('Files with errors: %12d\n' %parser.spdx_errors)
+                sys.stderr.write("\n")
+                sys.stderr.write("License files:     %12d\n" % spdx.license_files)
+                sys.stderr.write("Exception files:   %12d\n" % spdx.exception_files)
+                sys.stderr.write("License IDs        %12d\n" % len(spdx.licenses))
+                sys.stderr.write("Exception IDs      %12d\n" % len(spdx.exceptions))
+                sys.stderr.write("\n")
+                sys.stderr.write("Files checked:     %12d\n" % parser.checked)
+                sys.stderr.write("Lines checked:     %12d\n" % parser.lines_checked)
+                sys.stderr.write("Files with SPDX:   %12d\n" % parser.spdx_valid)
+                sys.stderr.write("Files with errors: %12d\n" % parser.spdx_errors)
 
             sys.exit(0)
 
     except Exception as ex:
-        sys.stderr.write('FAIL: %s\n' %ex)
-        sys.stderr.write('%s\n' %traceback.format_exc())
+        sys.stderr.write("FAIL: %s\n" % ex)
+        sys.stderr.write("%s\n" % traceback.format_exc())
         sys.exit(1)
diff --git a/scripts/update_ssh_config_guestfs.py b/scripts/update_ssh_config_guestfs.py
index 4d178d49..8b212a9c 100755
--- a/scripts/update_ssh_config_guestfs.py
+++ b/scripts/update_ssh_config_guestfs.py
@@ -30,6 +30,7 @@ ssh_template = """Host {name} {addr}
 	LogLevel FATAL
 """
 
+
 # We take the first IPv4 address on the first non-loopback interface.
 def get_addr(name):
     attempt = 0
@@ -38,7 +39,15 @@ def get_addr(name):
         if attempt > 60:
             raise Exception(f"Unable to get an address for {name} after 60s")
 
-        result = subprocess.run(['/usr/bin/virsh','qemu-agent-command',name,'{"execute":"guest-network-get-interfaces"}'], capture_output=True)
+        result = subprocess.run(
+            [
+                "/usr/bin/virsh",
+                "qemu-agent-command",
+                name,
+                '{"execute":"guest-network-get-interfaces"}',
+            ],
+            capture_output=True,
+        )
         # Did it error out? Sleep and try again.
         if result.returncode != 0:
             time.sleep(1)
@@ -48,15 +57,15 @@ def get_addr(name):
         netinfo = json.loads(result.stdout)
 
         ret = None
-        for iface in netinfo['return']:
-            if iface['name'] == 'lo':
+        for iface in netinfo["return"]:
+            if iface["name"] == "lo":
                 continue
-            if 'ip-addresses' not in iface:
+            if "ip-addresses" not in iface:
                 continue
-            for addr in iface['ip-addresses']:
-                if addr['ip-address-type'] != 'ipv4':
+            for addr in iface["ip-addresses"]:
+                if addr["ip-address-type"] != "ipv4":
                     continue
-                ret = addr['ip-address']
+                ret = addr["ip-address"]
                 break
 
         # If we didn't get an address, try again
@@ -64,11 +73,12 @@ def get_addr(name):
             return ret
         time.sleep(1)
 
+
 def main():
-    topdir = os.environ.get('TOPDIR', '.')
+    topdir = os.environ.get("TOPDIR", ".")
 
     # load extra_vars
-    with open(f'{topdir}/extra_vars.yaml') as stream:
+    with open(f"{topdir}/extra_vars.yaml") as stream:
         extra_vars = yaml.safe_load(stream)
 
     # slurp in the guestfs_nodes list
@@ -76,23 +86,28 @@ def main():
         nodes = yaml.safe_load(stream)
 
     if extra_vars.get("topdir_path_has_sha256sum", False):
-        ssh_config = f'{Path.home()}/.ssh/config_kdevops_{extra_vars["topdir_path_sha256sum"]}'
+        ssh_config = (
+            f'{Path.home()}/.ssh/config_kdevops_{extra_vars["topdir_path_sha256sum"]}'
+        )
     else:
-        ssh_config = f'{Path.home()}/.ssh/config_kdevops_{extra_vars["kdevops_host_prefix"]}'
+        ssh_config = (
+            f'{Path.home()}/.ssh/config_kdevops_{extra_vars["kdevops_host_prefix"]}'
+        )
 
     # make a stanza for each node
-    sshconf = open(ssh_config, 'w')
-    for node in nodes['guestfs_nodes']:
-        name = node['name']
+    sshconf = open(ssh_config, "w")
+    for node in nodes["guestfs_nodes"]:
+        name = node["name"]
         addr = get_addr(name)
         context = {
-            "name" : name,
-            "addr" : addr,
-            "sshkey" : f"{extra_vars['guestfs_path']}/{name}/ssh/id_ed25519"
+            "name": name,
+            "addr": addr,
+            "sshkey": f"{extra_vars['guestfs_path']}/{name}/ssh/id_ed25519",
         }
         sshconf.write(ssh_template.format(**context))
     sshconf.close()
     os.chmod(ssh_config, 0o600)
 
+
 if __name__ == "__main__":
     main()
diff --git a/scripts/workflows/blktests/blktests_watchdog.py b/scripts/workflows/blktests/blktests_watchdog.py
index 0cf7af98..2dd4444b 100755
--- a/scripts/workflows/blktests/blktests_watchdog.py
+++ b/scripts/workflows/blktests/blktests_watchdog.py
@@ -15,11 +15,14 @@ import configparser
 import argparse
 from itertools import chain
 
+
 def print_blktest_host_status(host, verbose, basedir, config):
     kernel = kssh.get_uname(host).rstrip()
     section = blktests.get_section(host, config)
-    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = blktests.get_blktest_host(host, basedir, kernel, section, config)
-    checktime =  blktests.get_last_run_time(host, basedir, kernel, section, last_test)
+    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = (
+        blktests.get_blktest_host(host, basedir, kernel, section, config)
+    )
+    checktime = blktests.get_last_run_time(host, basedir, kernel, section, last_test)
 
     percent_done = 0
     if checktime > 0:
@@ -38,12 +41,26 @@ def print_blktest_host_status(host, verbose, basedir, config):
             sys.stdout.write("Last    test       : None\n")
         else:
             percent_done_str = "%.0f%%" % (0)
-            sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % (host, "None", percent_done_str, 0, 0, stall_str, kernel))
+            sys.stdout.write(
+                "%35s%20s%20s%20s%20s%15s%30s\n"
+                % (host, "None", percent_done_str, 0, 0, stall_str, kernel)
+            )
         return
 
     if not verbose:
         percent_done_str = "%.0f%%" % (percent_done)
-        sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % (host, last_test, percent_done_str, str(delta_seconds), str(checktime), stall_str, kernel))
+        sys.stdout.write(
+            "%35s%20s%20s%20s%20s%15s%30s\n"
+            % (
+                host,
+                last_test,
+                percent_done_str,
+                str(delta_seconds),
+                str(checktime),
+                stall_str,
+                kernel,
+            )
+        )
         return
 
     sys.stdout.write("Host               : %s\n" % (host))
@@ -62,23 +79,37 @@ def print_blktest_host_status(host, verbose, basedir, config):
         sys.stdout.write("OK")
     sys.stdout.write("\n")
 
+
 def _main():
-    parser = argparse.ArgumentParser(description='blktest-watchdog')
-    parser.add_argument('hostfile', metavar='<ansible hostfile>', type=str,
-                        default='hosts',
-                        help='Ansible hostfile to use')
-    parser.add_argument('hostsection', metavar='<ansible hostsection>', type=str,
-                        default='baseline',
-                        help='The name of the section to read hosts from')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on otput.')
+    parser = argparse.ArgumentParser(description="blktest-watchdog")
+    parser.add_argument(
+        "hostfile",
+        metavar="<ansible hostfile>",
+        type=str,
+        default="hosts",
+        help="Ansible hostfile to use",
+    )
+    parser.add_argument(
+        "hostsection",
+        metavar="<ansible hostsection>",
+        type=str,
+        default="baseline",
+        help="The name of the section to read hosts from",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.hostfile):
         sys.stdout.write("%s does not exist\n" % (args.hostfile))
         sys.exit(1)
 
-    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + '/.config'
+    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + "/.config"
     config = blktests.get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -86,9 +117,21 @@ def _main():
     basedir = os.path.dirname(dotconfig)
 
     hosts = blktests.get_hosts(args.hostfile, args.hostsection)
-    sys.stdout.write("%35s%20s%20s%20s%20s%15s%30s\n" % ("Hostname", "Test-name", "Completion %", "runtime(s)", "last-runtime(s)", "Stall-status", "Kernel"))
+    sys.stdout.write(
+        "%35s%20s%20s%20s%20s%15s%30s\n"
+        % (
+            "Hostname",
+            "Test-name",
+            "Completion %",
+            "runtime(s)",
+            "last-runtime(s)",
+            "Stall-status",
+            "Kernel",
+        )
+    )
     for h in hosts:
         print_blktest_host_status(h, args.verbose, basedir, config)
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     ret = _main()
diff --git a/scripts/workflows/cxl/gen_qemu_cxl.py b/scripts/workflows/cxl/gen_qemu_cxl.py
index 778567ea..92755f12 100755
--- a/scripts/workflows/cxl/gen_qemu_cxl.py
+++ b/scripts/workflows/cxl/gen_qemu_cxl.py
@@ -3,114 +3,213 @@
 import argparse
 import os
 
+
 def qemu_print(kind, value, last=False):
     global args
-    if args.format == 'xml':
-        print('<qemu:arg value=\'%s\'/>' % kind)
-        print('<qemu:arg value=\'%s\'/>' % value)
+    if args.format == "xml":
+        print("<qemu:arg value='%s'/>" % kind)
+        print("<qemu:arg value='%s'/>" % value)
     else:
-        print('%s %s %s' % (kind, value, '' if last else '\\'))
+        print("%s %s %s" % (kind, value, "" if last else "\\"))
+
 
 def host_bridge(hb_id, bus, addr):
-    return 'pxb-cxl,bus=pcie.0,id=cxl.%d,bus_nr=0x%x,addr=0x%x' % (hb_id, bus, addr)
+    return "pxb-cxl,bus=pcie.0,id=cxl.%d,bus_nr=0x%x,addr=0x%x" % (hb_id, bus, addr)
+
 
 def root_port(rp_id, hb_id, port, slot):
-    return 'cxl-rp,port=%d,bus=cxl.%d,id=cxl_rp%d,chassis=0,slot=%d' % (port, hb_id, rp_id, slot)
+    return "cxl-rp,port=%d,bus=cxl.%d,id=cxl_rp%d,chassis=0,slot=%d" % (
+        port,
+        hb_id,
+        rp_id,
+        slot,
+    )
+
 
 def switch(rp_id):
-    return 'cxl-upstream,bus=cxl_rp%d,id=cxl_switch%d,addr=0.0,multifunction=on' % (rp_id, rp_id)
+    return "cxl-upstream,bus=cxl_rp%d,id=cxl_switch%d,addr=0.0,multifunction=on" % (
+        rp_id,
+        rp_id,
+    )
+
 
 def mailbox(rp_id):
-    return 'cxl-switch-mailbox-cci,bus=cxl_rp%d,addr=0.1,target=cxl_switch%d' % (rp_id, rp_id)
+    return "cxl-switch-mailbox-cci,bus=cxl_rp%d,addr=0.1,target=cxl_switch%d" % (
+        rp_id,
+        rp_id,
+    )
+
 
 def downstream_port(dport_id, dport, rp_id, slot):
-    return 'cxl-downstream,port=%d,bus=cxl_switch%d,id=cxl_dport%d,chassis=0,slot=%d' % (dport, rp_id, dport_id, slot)
+    return (
+        "cxl-downstream,port=%d,bus=cxl_switch%d,id=cxl_dport%d,chassis=0,slot=%d"
+        % (dport, rp_id, dport_id, slot)
+    )
+
 
 def memdev(dport_id, path, sizestr, sizeval, create):
-    filename = '%s/cxl_mem%d.raw' % (path, dport_id)
-    if not(os.path.exists(filename)) and create:
-        if not(os.path.exists(path)):
-            print('ERROR: Tried to create memdev file but directory %s does not exist.' % path)
+    filename = "%s/cxl_mem%d.raw" % (path, dport_id)
+    if not (os.path.exists(filename)) and create:
+        if not (os.path.exists(path)):
+            print(
+                "ERROR: Tried to create memdev file but directory %s does not exist."
+                % path
+            )
             exit(1)
         os.umask(0)
-        with open(filename, 'wb') as file:
+        with open(filename, "wb") as file:
             file.truncate(sizeval)
-    return 'memory-backend-file,id=cxl_memdev%d,share=on,mem-path=%s,size=%s' % (dport_id, filename, sizestr)
+    return "memory-backend-file,id=cxl_memdev%d,share=on,mem-path=%s,size=%s" % (
+        dport_id,
+        filename,
+        sizestr,
+    )
+
 
 def lsa(path, create):
-    filename = '%s/cxl_lsa.raw' % path
-    if not(os.path.exists(filename)) and create:
-        if not(os.path.exists(path)):
-            print('ERROR: Tried to create lsa file but directory %s does not exist.' % path)
+    filename = "%s/cxl_lsa.raw" % path
+    if not (os.path.exists(filename)) and create:
+        if not (os.path.exists(path)):
+            print(
+                "ERROR: Tried to create lsa file but directory %s does not exist."
+                % path
+            )
             exit(1)
         os.umask(0)
-        with open(filename, 'wb') as file:
+        with open(filename, "wb") as file:
             file.truncate(256 * 1024 * 1024)
-    return 'memory-backend-file,id=cxl_lsa,share=on,mem-path=%s,size=256M' % filename
+    return "memory-backend-file,id=cxl_lsa,share=on,mem-path=%s,size=256M" % filename
 
 
 def type3(dport_id):
-    return 'cxl-type3,bus=cxl_dport%d,memdev=cxl_memdev%d,lsa=cxl_lsa,id=cxl_mem%d' % (dport_id, dport_id, dport_id)
+    return "cxl-type3,bus=cxl_dport%d,memdev=cxl_memdev%d,lsa=cxl_lsa,id=cxl_mem%d" % (
+        dport_id,
+        dport_id,
+        dport_id,
+    )
+
 
 def fmw(num_hb):
-    s = ''
+    s = ""
     for hb in range(num_hb):
-        s += 'cxl-fmw.0.targets.%d=cxl.%d,' % (hb, hb)
-    return s + 'cxl-fmw.0.size=8G,cxl-fmw.0.interleave-granularity=256'
-
-parser = argparse.ArgumentParser(description='QEMU CXL configuration generator', usage='%(prog)s [options]')
-parser.add_argument('-m', '--memdev-path', dest='memdev_path',
-                    help='Path to location of backing memdev files', required=True)
-parser.add_argument('-s', '--size', dest='size',
-                    help='Size of each memory device in bytes (i.e. 512M, 16G)', required=True)
-parser.add_argument('-c', '--create-memdev-files', dest='create_memdevs',
-                    help='Create memdev file if not found', action='store_true', default=False)
-parser.add_argument('-f', '--format', dest='format',
-                    help='Format of QEMU args',
-                    default='cmdline', choices=['cmdline', 'xml'])
-parser.add_argument('-b', '--host-bridges', dest='num_hb',
-                    help='Number of host bridges',
-                    type=int, default=1, choices=range(1,5))
-parser.add_argument('-r', '--root-ports', dest='num_rp',
-                    help='Number of root ports per host bridge',
-                    type=int, default=1, choices=range(1,5))
-parser.add_argument('-d', '--downstream-ports', dest='num_dport',
-                    help='Number of downstream ports per switch',
-                    type=int, default=1, choices=range(1,9))
-parser.add_argument('-p', '--pci-bus-number', dest='bus_nr',
-                    help='PCI bus number for first host bridge (default: 0x38)',
-                    type=int, default=0x38)
-parser.add_argument('--bus-alloc-per-host-bridge', dest='bus_per',
-                    help='Number of PCI buses to allocate per host bridge (default: 16)',
-                    type=int, default=16)
-parser.add_argument('--pci-function-number', dest='func_nr',
-                    help='Starting PCI function number for host bridges on root PCI bus 0 (default: 9)',
-                    type=int, default=9)
+        s += "cxl-fmw.0.targets.%d=cxl.%d," % (hb, hb)
+    return s + "cxl-fmw.0.size=8G,cxl-fmw.0.interleave-granularity=256"
+
+
+parser = argparse.ArgumentParser(
+    description="QEMU CXL configuration generator", usage="%(prog)s [options]"
+)
+parser.add_argument(
+    "-m",
+    "--memdev-path",
+    dest="memdev_path",
+    help="Path to location of backing memdev files",
+    required=True,
+)
+parser.add_argument(
+    "-s",
+    "--size",
+    dest="size",
+    help="Size of each memory device in bytes (i.e. 512M, 16G)",
+    required=True,
+)
+parser.add_argument(
+    "-c",
+    "--create-memdev-files",
+    dest="create_memdevs",
+    help="Create memdev file if not found",
+    action="store_true",
+    default=False,
+)
+parser.add_argument(
+    "-f",
+    "--format",
+    dest="format",
+    help="Format of QEMU args",
+    default="cmdline",
+    choices=["cmdline", "xml"],
+)
+parser.add_argument(
+    "-b",
+    "--host-bridges",
+    dest="num_hb",
+    help="Number of host bridges",
+    type=int,
+    default=1,
+    choices=range(1, 5),
+)
+parser.add_argument(
+    "-r",
+    "--root-ports",
+    dest="num_rp",
+    help="Number of root ports per host bridge",
+    type=int,
+    default=1,
+    choices=range(1, 5),
+)
+parser.add_argument(
+    "-d",
+    "--downstream-ports",
+    dest="num_dport",
+    help="Number of downstream ports per switch",
+    type=int,
+    default=1,
+    choices=range(1, 9),
+)
+parser.add_argument(
+    "-p",
+    "--pci-bus-number",
+    dest="bus_nr",
+    help="PCI bus number for first host bridge (default: 0x38)",
+    type=int,
+    default=0x38,
+)
+parser.add_argument(
+    "--bus-alloc-per-host-bridge",
+    dest="bus_per",
+    help="Number of PCI buses to allocate per host bridge (default: 16)",
+    type=int,
+    default=16,
+)
+parser.add_argument(
+    "--pci-function-number",
+    dest="func_nr",
+    help="Starting PCI function number for host bridges on root PCI bus 0 (default: 9)",
+    type=int,
+    default=9,
+)
 
 args = parser.parse_args()
 
-suffix_dict = {'M': 1024 ** 2, 'G': 1024 ** 3}
+suffix_dict = {"M": 1024**2, "G": 1024**3}
 suffix = args.size[-1].upper()
-if not(suffix in suffix_dict):
-    print('ERROR: size must end in M (for MiB) or G (for GiB)')
+if not (suffix in suffix_dict):
+    print("ERROR: size must end in M (for MiB) or G (for GiB)")
     exit(1)
 size = int(args.size[:-1]) * suffix_dict[suffix]
 
 slot = 0
-qemu_print('-machine', 'cxl=on')
-qemu_print('-object', lsa(args.memdev_path, args.create_memdevs))
+qemu_print("-machine", "cxl=on")
+qemu_print("-object", lsa(args.memdev_path, args.create_memdevs))
 for hb in range(args.num_hb):
-    qemu_print('-device', host_bridge(hb, args.bus_nr + hb * args.bus_per, hb + args.func_nr))
+    qemu_print(
+        "-device", host_bridge(hb, args.bus_nr + hb * args.bus_per, hb + args.func_nr)
+    )
     for rp in range(args.num_rp):
         rp_id = hb * args.num_rp + rp
-        qemu_print('-device', root_port(rp_id, hb, rp, slot))
+        qemu_print("-device", root_port(rp_id, hb, rp, slot))
         slot += 1
-        qemu_print('-device', switch(rp_id))
-        qemu_print('-device', mailbox(rp_id))
+        qemu_print("-device", switch(rp_id))
+        qemu_print("-device", mailbox(rp_id))
         for dport in range(args.num_dport):
-            dport_id = rp_id * args.num_dport + dport;
-            qemu_print('-device', downstream_port(dport_id, dport, rp_id, slot))
+            dport_id = rp_id * args.num_dport + dport
+            qemu_print("-device", downstream_port(dport_id, dport, rp_id, slot))
             slot += 1
-            qemu_print('-object', memdev(dport_id, args.memdev_path, args.size, size, args.create_memdevs))
-            qemu_print('-device', type3(dport_id))
-qemu_print('-M', fmw(args.num_hb), last=True)
+            qemu_print(
+                "-object",
+                memdev(
+                    dport_id, args.memdev_path, args.size, size, args.create_memdevs
+                ),
+            )
+            qemu_print("-device", type3(dport_id))
+qemu_print("-M", fmw(args.num_hb), last=True)
diff --git a/scripts/workflows/fstests/fstests_watchdog.py b/scripts/workflows/fstests/fstests_watchdog.py
index 3fef5484..beb229e6 100755
--- a/scripts/workflows/fstests/fstests_watchdog.py
+++ b/scripts/workflows/fstests/fstests_watchdog.py
@@ -17,6 +17,7 @@ import configparser
 import argparse
 from itertools import chain
 
+
 def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config):
     if "CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE" in config and not use_ssh:
         configured_kernel = None
@@ -35,8 +36,11 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
         kernel = kssh.get_uname(host).rstrip()
 
     section = fstests.get_section(host, config)
-    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = \
-        fstests.get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
+    (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect) = (
+        fstests.get_fstest_host(
+            use_remote, use_ssh, host, basedir, kernel, section, config
+        )
+    )
 
     checktime = fstests.get_checktime(host, basedir, kernel, section, last_test)
     percent_done = (delta_seconds * 100 / checktime) if checktime > 0 else 0
@@ -49,10 +53,9 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
             stall_str = "Hung-Stalled"
 
     crash_state = "OK"
-    watchdog = KernelCrashWatchdog(host_name=host,
-                                   decode_crash=True,
-                                   reset_host=True,
-                                   save_warnings=True)
+    watchdog = KernelCrashWatchdog(
+        host_name=host, decode_crash=True, reset_host=True, save_warnings=True
+    )
     crash_file, warning_file = watchdog.check_and_reset_host()
     if crash_file:
         crash_state = "CRASH"
@@ -60,7 +63,9 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
         crash_state = "WARNING"
 
     if not verbose:
-        soak_duration_seconds = int(config.get("CONFIG_FSTESTS_SOAK_DURATION", '0').strip('"'))
+        soak_duration_seconds = int(
+            config.get("CONFIG_FSTESTS_SOAK_DURATION", "0").strip('"')
+        )
         uses_soak = fstests.fstests_test_uses_soak_duration(last_test or "")
         is_soaking = uses_soak and soak_duration_seconds != 0
         soaking_str = "(soak)" if is_soaking else ""
@@ -83,30 +88,56 @@ def print_fstest_host_status(host, verbose, use_remote, use_ssh, basedir, config
     sys.stdout.write("Delta: %d total second\n" % (delta_seconds))
     sys.stdout.write("\t%d minutes\n" % (delta_seconds / 60))
     sys.stdout.write("\t%d seconds\n" % (delta_seconds % 60))
-    sys.stdout.write("Timeout-status: %s\n" % ("POSSIBLE-STALL" if stall_suspect else "OK"))
+    sys.stdout.write(
+        "Timeout-status: %s\n" % ("POSSIBLE-STALL" if stall_suspect else "OK")
+    )
     sys.stdout.write("Crash-status  : %s\n" % crash_state)
 
+
 def _main():
-    parser = argparse.ArgumentParser(description='fstest-watchdog')
-    parser.add_argument('hostfile', metavar='<ansible hostfile>', type=str,
-                        default='hosts',
-                        help='Ansible hostfile to use')
-    parser.add_argument('hostsection', metavar='<ansible hostsection>', type=str,
-                        default='baseline',
-                        help='The name of the section to read hosts from')
-    parser.add_argument('--verbose', const=True, default=False, action="store_const",
-                        help='Be verbose on output.')
-    parser.add_argument('--use-systemd-remote', const=True, default=True, action="store_const",
-                        help='Use systemd-remote uploaded journals if available')
-    parser.add_argument('--use-ssh', const=True, default=False, action="store_const",
-                        help='Force to only use ssh for journals.')
+    parser = argparse.ArgumentParser(description="fstest-watchdog")
+    parser.add_argument(
+        "hostfile",
+        metavar="<ansible hostfile>",
+        type=str,
+        default="hosts",
+        help="Ansible hostfile to use",
+    )
+    parser.add_argument(
+        "hostsection",
+        metavar="<ansible hostsection>",
+        type=str,
+        default="baseline",
+        help="The name of the section to read hosts from",
+    )
+    parser.add_argument(
+        "--verbose",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Be verbose on output.",
+    )
+    parser.add_argument(
+        "--use-systemd-remote",
+        const=True,
+        default=True,
+        action="store_const",
+        help="Use systemd-remote uploaded journals if available",
+    )
+    parser.add_argument(
+        "--use-ssh",
+        const=True,
+        default=False,
+        action="store_const",
+        help="Force to only use ssh for journals.",
+    )
     args = parser.parse_args()
 
     if not os.path.isfile(args.hostfile):
         sys.stdout.write("%s does not exist\n" % (args.hostfile))
         sys.exit(1)
 
-    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + '/.config'
+    dotconfig = os.path.dirname(os.path.abspath(args.hostfile)) + "/.config"
     config = fstests.get_config(dotconfig)
     if not config:
         sys.stdout.write("%s does not exist\n" % (dotconfig))
@@ -119,11 +150,16 @@ def _main():
         if group is not None:
             remote_gid = group[2]
             if remote_gid not in os.getgrouplist(os.getlogin(), os.getgid()):
-                sys.stderr.write("Your username is not part of the group %s\n" % remote_group)
+                sys.stderr.write(
+                    "Your username is not part of the group %s\n" % remote_group
+                )
                 sys.stderr.write("Fix this and try again")
                 sys.exit(1)
         else:
-            sys.stderr.write("The group %s was not found, add Kconfig support for the systemd-remote-journal group used" % remote_group)
+            sys.stderr.write(
+                "The group %s was not found, add Kconfig support for the systemd-remote-journal group used"
+                % remote_group
+            )
             sys.exit(1)
 
     hosts = fstests.get_hosts(args.hostfile, args.hostsection)
@@ -133,13 +169,13 @@ def _main():
         f"{'Kernel':<38}  {'Crash-status':<10}\n"
     )
     for h in hosts:
-        print_fstest_host_status(h, args.verbose,
-                                 args.use_systemd_remote,
-                                 args.use_ssh,
-                                 basedir,
-                                 config)
+        print_fstest_host_status(
+            h, args.verbose, args.use_systemd_remote, args.use_ssh, basedir, config
+        )
 
-    soak_duration_seconds = int(config.get("CONFIG_FSTESTS_SOAK_DURATION", '0').strip('"'))
+    soak_duration_seconds = int(
+        config.get("CONFIG_FSTESTS_SOAK_DURATION", "0").strip('"')
+    )
     journal_method = "ssh"
     if "CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE" in config and not args.use_ssh:
         journal_method = "systemd-journal-remote"
@@ -147,5 +183,6 @@ def _main():
     sys.stdout.write("\n%25s%20s\n" % ("Journal-method", "Soak-duration(s)"))
     sys.stdout.write("%25s%20d\n" % (journal_method, soak_duration_seconds))
 
-if __name__ == '__main__':
+
+if __name__ == "__main__":
     ret = _main()
diff --git a/scripts/workflows/generic/crash_report.py b/scripts/workflows/generic/crash_report.py
index 10b4f958..65a05343 100755
--- a/scripts/workflows/generic/crash_report.py
+++ b/scripts/workflows/generic/crash_report.py
@@ -104,6 +104,8 @@ def generate_commit_log():
 
 if __name__ == "__main__":
     if not CRASH_DIR.exists():
-        print(f"No crashes, filesystem corruption isues, or kernel warnings were detected on this run.")
+        print(
+            f"No crashes, filesystem corruption isues, or kernel warnings were detected on this run."
+        )
         exit(0)
     generate_commit_log()
diff --git a/scripts/workflows/generic/crash_watchdog.py b/scripts/workflows/generic/crash_watchdog.py
index f271cf9c..15e92513 100755
--- a/scripts/workflows/generic/crash_watchdog.py
+++ b/scripts/workflows/generic/crash_watchdog.py
@@ -24,13 +24,16 @@ logging.basicConfig(
 )
 logger = logging.getLogger("crash_watchdog")
 
+
 def get_active_hosts():
     """Get the list of active hosts from kdevops configuration."""
     try:
         # First try to get the hosts from the ansible inventory
         result = subprocess.run(
             ["ansible-inventory", "-i", "hosts", "--list"],
-            capture_output=True, text=True, check=True
+            capture_output=True,
+            text=True,
+            check=True,
         )
         inventory = yaml.safe_load(result.stdout)
         hosts = inventory.get("baseline", {}).get("hosts", [])
@@ -39,6 +42,7 @@ def get_active_hosts():
         logger.error(f"Error getting active hosts: {e}")
         return []
 
+
 def run_crash_watchdog_on_host(args, this_host_name):
     watchdog = KernelCrashWatchdog(
         host_name=this_host_name,
@@ -46,13 +50,15 @@ def run_crash_watchdog_on_host(args, this_host_name):
         full_log=args.full_log,
         decode_crash=not args.no_decode,
         reset_host=not args.no_reset,
-        save_warnings = args.save_warnings,
+        save_warnings=args.save_warnings,
     )
 
     crashed = False
     warnings_found = False
 
-    crash_file, warning_file = watchdog.check_and_reset_host(method=args.method, get_fstests_log=args.fstests_log)
+    crash_file, warning_file = watchdog.check_and_reset_host(
+        method=args.method, get_fstests_log=args.fstests_log
+    )
 
     if warning_file:
         logger.warning(f"Kernel warning and logged to {warning_file}")
@@ -66,6 +72,7 @@ def run_crash_watchdog_on_host(args, this_host_name):
         logger.debug(f"No crash detected for host {this_host_name}")
     return crashed, [crash_file], warnings_found, warning_file
 
+
 def run_crash_watchdog_all_hosts(args):
     """Check all active hosts for kernel crashes."""
     hosts = get_active_hosts()
@@ -74,12 +81,12 @@ def run_crash_watchdog_all_hosts(args):
     warnings_detected = False
     warning_files = []
 
-    logger.info(
-        f"Checking {len(hosts)} hosts for kernel crashes: {', '.join(hosts)}"
-    )
+    logger.info(f"Checking {len(hosts)} hosts for kernel crashes: {', '.join(hosts)}")
 
     for host in hosts:
-        host_crash_detected, crash_file, host_warnings_detected, warnings_file = run_crash_watchdog_on_host(args, host)
+        host_crash_detected, crash_file, host_warnings_detected, warnings_file = (
+            run_crash_watchdog_on_host(args, host)
+        )
         if host_crash_detected and crash_file:
             crash_detected = True
             crash_files.append(crash_file)
@@ -87,10 +94,13 @@ def run_crash_watchdog_all_hosts(args):
         if host_warnings_detected and warnings_file:
             warnings_detected = True
             warning_files.append(warning_file)
-            logger.warning(f"Kernel warning found on host {host}, logs saved to {warning_file}")
+            logger.warning(
+                f"Kernel warning found on host {host}, logs saved to {warning_file}"
+            )
 
     return crash_detected, crash_files, warnings_detected, warning_files
 
+
 def write_log_section(f, title, files, label):
     f.write(f"# {title}\n\n")
     for path in files:
@@ -102,6 +112,7 @@ def write_log_section(f, title, files, label):
         except Exception as e:
             f.write(f"\nError reading {label.lower()} file: {e}\n\n")
 
+
 def main():
     parser = argparse.ArgumentParser(
         description="Detect and handle kernel crashes or kernel warnings in hosts.",
@@ -138,22 +149,45 @@ Examples:
   Get all kernel warnings only:
     ./crash_watchdog.py e3-ext4-2k --method remote --save-warnings sad.warn
         """,
-        formatter_class=argparse.RawTextHelpFormatter
+        formatter_class=argparse.RawTextHelpFormatter,
     )
 
-    parser.add_argument("--host-name", help="Optional name of the host to check", default="all")
-    parser.add_argument("--output-dir", help="Directory to store crash logs", default="crashes")
+    parser.add_argument(
+        "--host-name", help="Optional name of the host to check", default="all"
+    )
+    parser.add_argument(
+        "--output-dir", help="Directory to store crash logs", default="crashes"
+    )
     parser.add_argument(
         "--method",
         choices=["auto", "remote", "console", "ssh"],
         default="auto",
-        help="Choose method to collect logs: auto, remote, console, or ssh"
+        help="Choose method to collect logs: auto, remote, console, or ssh",
+    )
+    parser.add_argument(
+        "--full-log",
+        action="store_true",
+        help="Get full kernel log instead of only crash context",
+    )
+    parser.add_argument(
+        "--no-decode",
+        action="store_true",
+        help="Disable decoding crash logs with decode_stacktrace.sh",
+    )
+    parser.add_argument(
+        "--no-reset",
+        action="store_true",
+        help="Do not reset the guest even if a crash is detected",
+    )
+    parser.add_argument(
+        "--fstests-log",
+        help="Show all kernel log lines for a specific fstests test ID (e.g., generic/750)",
+    )
+    parser.add_argument(
+        "--save-warnings",
+        help="Do you want detected and save kernel warnings",
+        default=True,
     )
-    parser.add_argument("--full-log", action="store_true", help="Get full kernel log instead of only crash context")
-    parser.add_argument("--no-decode", action="store_true", help="Disable decoding crash logs with decode_stacktrace.sh")
-    parser.add_argument("--no-reset", action="store_true", help="Do not reset the guest even if a crash is detected")
-    parser.add_argument("--fstests-log", help="Show all kernel log lines for a specific fstests test ID (e.g., generic/750)")
-    parser.add_argument("--save-warnings", help="Do you want detected and save kernel warnings", default=True)
     args = parser.parse_args()
     crash_files = []
     warnings_files = []
@@ -164,10 +198,14 @@ Examples:
         args.save_warnings = False
         args.full_log_mode = True
 
-    if (args.host_name != "all"):
-        crash_detected, crash_files, warnings_detected, warnings_files = run_crash_watchdog_on_host(args, args.host_name)
+    if args.host_name != "all":
+        crash_detected, crash_files, warnings_detected, warnings_files = (
+            run_crash_watchdog_on_host(args, args.host_name)
+        )
     else:
-        crash_detected, crash_files, warnings_detected, warnings_files = run_crash_watchdog_all_hosts(args)
+        crash_detected, crash_files, warnings_detected, warnings_files = (
+            run_crash_watchdog_all_hosts(args)
+        )
 
     if warnings_detected:
         logger.warning("Kernel warnings detected in one or more hosts")
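The `get_active_hosts()` helper touched above parses `ansible-inventory -i hosts --list` output with `yaml.safe_load` and pulls the `baseline` group's host list. Since JSON is a YAML subset, the parse step can be sketched standalone with canned output (the hostnames below are hypothetical):

```python
import json

# Canned output shaped like `ansible-inventory -i hosts --list`; the real
# helper feeds the live command's stdout to yaml.safe_load, and JSON is
# valid YAML, so the extraction step is identical.
stdout = json.dumps(
    {
        "_meta": {"hostvars": {}},
        "baseline": {"hosts": ["demo-xfs-4k", "demo-xfs-2k"]},
    }
)

inventory = json.loads(stdout)
hosts = inventory.get("baseline", {}).get("hosts", [])
print(hosts)
```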
diff --git a/scripts/workflows/lib/blktests.py b/scripts/workflows/lib/blktests.py
index 6f156854..7945c750 100644
--- a/scripts/workflows/lib/blktests.py
+++ b/scripts/workflows/lib/blktests.py
@@ -7,18 +7,21 @@ import argparse
 import re
 from itertools import chain
 
+
 class BlktestsError(Exception):
     pass
 
+
 def blktests_check_pid(host):
     pid = kssh.first_process_name_pid(host, "check")
     if pid <= 0:
         return pid
-    dir = "/proc/" + str(pid)  + "/cwd/tests"
+    dir = "/proc/" + str(pid) + "/cwd/tests"
     if kssh.dir_exists(host, dir):
         return pid
     return 0
 
+
 def get_blktest_host(host, basedir, kernel, section, config):
     stall_suspect = False
     if kernel == "Uname-issue":
@@ -39,7 +42,7 @@ def get_blktest_host(host, basedir, kernel, section, config):
     last_test_time = latest_dmesg_blktest_line.split("at ")[1].rstrip()
     current_time_str = kssh.get_current_time(host).rstrip()
 
-    blktests_date_str_format = '%Y-%m-%d %H:%M:%S'
+    blktests_date_str_format = "%Y-%m-%d %H:%M:%S"
     d1 = datetime.strptime(last_test_time, blktests_date_str_format)
     d2 = datetime.strptime(current_time_str, blktests_date_str_format)
 
@@ -49,20 +52,26 @@ def get_blktest_host(host, basedir, kernel, section, config):
     if "CONFIG_BLKTESTS_WATCHDOG" not in config:
         enable_watchdog = False
     else:
-        enable_watchdog = config["CONFIG_BLKTESTS_WATCHDOG"].strip('\"')
+        enable_watchdog = config["CONFIG_BLKTESTS_WATCHDOG"].strip('"')
 
     if enable_watchdog:
-        max_new_test_time = config["CONFIG_BLKTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip('\"')
+        max_new_test_time = config["CONFIG_BLKTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip(
+            '"'
+        )
         max_new_test_time = int(max_new_test_time)
         if not max_new_test_time:
             max_new_test_time = 60
 
-        hung_multiplier_long_tests = config["CONFIG_BLKTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"].strip('\"')
+        hung_multiplier_long_tests = config[
+            "CONFIG_BLKTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"
+        ].strip('"')
         hung_multiplier_long_tests = int(hung_multiplier_long_tests)
         if not hung_multiplier_long_tests:
             hung_multiplier_long_tests = 10
 
-        hung_fast_test_max_time = config["CONFIG_BLKTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"].strip('\"')
+        hung_fast_test_max_time = config[
+            "CONFIG_BLKTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"
+        ].strip('"')
         hung_fast_test_max_time = int(hung_fast_test_max_time)
         if not hung_fast_test_max_time:
             hung_fast_test_max_time = 5
@@ -83,16 +92,21 @@ def get_blktest_host(host, basedir, kernel, section, config):
         # If a test typically takes between 1 second to 30 seconds we can likely
         # safely assume the system has crashed after hung_fast_test_max_time
         # minutes
-        elif last_run_time_s >  0:
+        elif last_run_time_s > 0:
             suspect_crash_time_seconds = 60 * hung_fast_test_max_time
 
-        if delta_seconds >= suspect_crash_time_seconds and 'blktestsstart/000' not in last_test and 'blktestsend/000' not in last_test:
+        if (
+            delta_seconds >= suspect_crash_time_seconds
+            and "blktestsstart/000" not in last_test
+            and "blktestsend/000" not in last_test
+        ):
             stall_suspect = True
 
     return (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect)
 
+
 def get_last_run_time(host, basedir, kernel, section, last_test):
-    results_dir = basedir + '/workflows/blktests/results/last-run/'
+    results_dir = basedir + "/workflows/blktests/results/last-run/"
     if not last_test:
         return 0
     if not os.path.isdir(results_dir):
@@ -115,7 +129,7 @@ def get_last_run_time(host, basedir, kernel, section, last_test):
                 break
     if not ok_file:
         return 0
-    f = open(ok_file, 'r')
+    f = open(ok_file, "r")
     for line in f:
         if not "runtime" in line:
             continue
@@ -129,21 +143,28 @@ def get_last_run_time(host, basedir, kernel, section, last_test):
         return float(time_string_elems[0])
     return 0
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def get_section(host, config):
-    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('\"')
+    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('"')
     return host.split(hostprefix + "-")[1].replace("-", "_")
 
+
 def get_hosts(hostfile, hostsection):
     hosts = []
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     config.read(hostfile)
     if hostsection not in config:
         return hosts
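The `get_config()` pattern reformatted in this hunk relies on a small trick worth isolating: a kdevops `.config` is plain `KEY="value"` lines with no `[section]` header, which `configparser` rejects, so a synthetic `[top]` section is chained in front before parsing. A self-contained sketch (sample keys are illustrative):

```python
import configparser
import io
from itertools import chain

# Headerless Kconfig-style input; configparser needs at least one section,
# so prepend a synthetic "[top]" line before the file's own lines.
dotconfig = io.StringIO(
    'CONFIG_KDEVOPS_HOSTS_PREFIX="demo"\n' 'CONFIG_BLKTESTS_WATCHDOG="y"\n'
)
parser = configparser.ConfigParser(
    allow_no_value=True, strict=False, interpolation=None
)
parser.read_file(chain(("[top]",), dotconfig))
config = parser["top"]

# Kconfig values keep their double quotes, hence the .strip('"') calls
# seen throughout these helpers.
print(config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('"'))
```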
diff --git a/scripts/workflows/lib/crash.py b/scripts/workflows/lib/crash.py
index d9d121d1..5397d784 100755
--- a/scripts/workflows/lib/crash.py
+++ b/scripts/workflows/lib/crash.py
@@ -262,7 +262,7 @@ class KernelCrashWatchdog:
         save_warnings=False,
         context_prefix=0,
         context_postfix=35,
-        ssh_timeout = 180,
+        ssh_timeout=180,
     ):
         self.host_name = host_name
         self.output_dir = os.path.join(output_dir, host_name)
@@ -411,7 +411,7 @@ class KernelCrashWatchdog:
 
                     if not key_log_line:
                         logger.warning(f"Error getting key log line for {file_path}")
-                        continue;
+                        continue
                     # Use the first relevant line for any context
                     log_hash = hashlib.md5(key_log_line.encode()).hexdigest()
                     self.known_crashes.append(log_hash)
@@ -551,9 +551,7 @@ class KernelCrashWatchdog:
                 seconds = float(match.group(1))
                 wall_time = boot_time + timedelta(seconds=seconds)
                 timestamp = wall_time.strftime("%b %d %H:%M:%S")
-                converted_lines.append(
-                    f"{timestamp} {self.host_name} {match.group(2)}"
-                )
+                converted_lines.append(f"{timestamp} {self.host_name} {match.group(2)}")
             else:
                 converted_lines.append(line)
 
@@ -780,7 +778,9 @@ class KernelCrashWatchdog:
             )
             logger.info(f"{self.host_name} is now reachable.")
         except subprocess.TimeoutExpired:
-            logger.error(f"Timeout: SSH connection to {self.host_name} did not succeed within {self.ssh_timeout} seconds. This kernel is probably seriously broken.")
+            logger.error(
+                f"Timeout: SSH connection to {self.host_name} did not succeed within {self.ssh_timeout} seconds. This kernel is probably seriously broken."
+            )
             sys.exit(1)
         except subprocess.CalledProcessError as e:
             logger.warning(f"Failed to wait for SSH on {self.host_name}: {e}")
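One detail visible in the crash.py hunks above is how duplicate crashes are suppressed: the watchdog hashes the first relevant log line with md5 and keeps the digest in a `known_crashes` list. The dedup idea in isolation (`seen_before` is a hypothetical wrapper, not the class's API):

```python
import hashlib

known_crashes = []


def seen_before(key_log_line):
    # Fingerprint a crash by the md5 of its key log line, the same dedup
    # idea KernelCrashWatchdog uses, so repeats are not re-reported.
    log_hash = hashlib.md5(key_log_line.encode()).hexdigest()
    if log_hash in known_crashes:
        return True
    known_crashes.append(log_hash)
    return False


print(seen_before("BUG: kernel NULL pointer dereference at 0000000000000008"))
print(seen_before("BUG: kernel NULL pointer dereference at 0000000000000008"))
```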
diff --git a/scripts/workflows/lib/fstests.py b/scripts/workflows/lib/fstests.py
index 328277a9..ee3fbe45 100644
--- a/scripts/workflows/lib/fstests.py
+++ b/scripts/workflows/lib/fstests.py
@@ -7,71 +7,75 @@ import configparser
 import argparse
 from itertools import chain
 
+
 class FstestsError(Exception):
     pass
 
+
 def fstests_check_pid(host):
     pid = kssh.first_process_name_pid(host, "check")
     if pid <= 0:
         return pid
-    dir = "/proc/" + str(pid)  + "/cwd/tests"
+    dir = "/proc/" + str(pid) + "/cwd/tests"
     if kssh.dir_exists(host, dir):
         return pid
     return 0
 
+
 # Later on we can automate this list with a git grep on the fstests
 # tests/ directory, and we inject this here.
 def fstests_test_uses_soak_duration(testname):
-    USES_SOAK_DURATION = [ "generic/019" ]
-    USES_SOAK_DURATION += [ "generic/388" ]
-    USES_SOAK_DURATION += [ "generic/475" ]
-    USES_SOAK_DURATION += [ "generic/476" ]
-    USES_SOAK_DURATION += [ "generic/521" ]
-    USES_SOAK_DURATION += [ "generic/522" ]
-    USES_SOAK_DURATION += [ "generic/616" ]
-    USES_SOAK_DURATION += [ "generic/617" ]
-    USES_SOAK_DURATION += [ "generic/642" ]
-    USES_SOAK_DURATION += [ "generic/650" ]
-    USES_SOAK_DURATION += [ "generic/648" ]
-    USES_SOAK_DURATION += [ "xfs/285" ]
-    USES_SOAK_DURATION += [ "xfs/517" ]
-    USES_SOAK_DURATION += [ "xfs/560" ]
-    USES_SOAK_DURATION += [ "xfs/561" ]
-    USES_SOAK_DURATION += [ "xfs/562" ]
-    USES_SOAK_DURATION += [ "xfs/565" ]
-    USES_SOAK_DURATION += [ "xfs/570" ]
-    USES_SOAK_DURATION += [ "xfs/571" ]
-    USES_SOAK_DURATION += [ "xfs/572" ]
-    USES_SOAK_DURATION += [ "xfs/573" ]
-    USES_SOAK_DURATION += [ "xfs/574" ]
-    USES_SOAK_DURATION += [ "xfs/575" ]
-    USES_SOAK_DURATION += [ "xfs/576" ]
-    USES_SOAK_DURATION += [ "xfs/577" ]
-    USES_SOAK_DURATION += [ "xfs/578" ]
-    USES_SOAK_DURATION += [ "xfs/579" ]
-    USES_SOAK_DURATION += [ "xfs/580" ]
-    USES_SOAK_DURATION += [ "xfs/581" ]
-    USES_SOAK_DURATION += [ "xfs/582" ]
-    USES_SOAK_DURATION += [ "xfs/583" ]
-    USES_SOAK_DURATION += [ "xfs/584" ]
-    USES_SOAK_DURATION += [ "xfs/585" ]
-    USES_SOAK_DURATION += [ "xfs/586" ]
-    USES_SOAK_DURATION += [ "xfs/587" ]
-    USES_SOAK_DURATION += [ "xfs/588" ]
-    USES_SOAK_DURATION += [ "xfs/589" ]
-    USES_SOAK_DURATION += [ "xfs/590" ]
-    USES_SOAK_DURATION += [ "xfs/591" ]
-    USES_SOAK_DURATION += [ "xfs/592" ]
-    USES_SOAK_DURATION += [ "xfs/593" ]
-    USES_SOAK_DURATION += [ "xfs/594" ]
-    USES_SOAK_DURATION += [ "xfs/595" ]
-    USES_SOAK_DURATION += [ "xfs/727" ]
-    USES_SOAK_DURATION += [ "xfs/729" ]
-    USES_SOAK_DURATION += [ "xfs/800" ]
+    USES_SOAK_DURATION = ["generic/019"]
+    USES_SOAK_DURATION += ["generic/388"]
+    USES_SOAK_DURATION += ["generic/475"]
+    USES_SOAK_DURATION += ["generic/476"]
+    USES_SOAK_DURATION += ["generic/521"]
+    USES_SOAK_DURATION += ["generic/522"]
+    USES_SOAK_DURATION += ["generic/616"]
+    USES_SOAK_DURATION += ["generic/617"]
+    USES_SOAK_DURATION += ["generic/642"]
+    USES_SOAK_DURATION += ["generic/650"]
+    USES_SOAK_DURATION += ["generic/648"]
+    USES_SOAK_DURATION += ["xfs/285"]
+    USES_SOAK_DURATION += ["xfs/517"]
+    USES_SOAK_DURATION += ["xfs/560"]
+    USES_SOAK_DURATION += ["xfs/561"]
+    USES_SOAK_DURATION += ["xfs/562"]
+    USES_SOAK_DURATION += ["xfs/565"]
+    USES_SOAK_DURATION += ["xfs/570"]
+    USES_SOAK_DURATION += ["xfs/571"]
+    USES_SOAK_DURATION += ["xfs/572"]
+    USES_SOAK_DURATION += ["xfs/573"]
+    USES_SOAK_DURATION += ["xfs/574"]
+    USES_SOAK_DURATION += ["xfs/575"]
+    USES_SOAK_DURATION += ["xfs/576"]
+    USES_SOAK_DURATION += ["xfs/577"]
+    USES_SOAK_DURATION += ["xfs/578"]
+    USES_SOAK_DURATION += ["xfs/579"]
+    USES_SOAK_DURATION += ["xfs/580"]
+    USES_SOAK_DURATION += ["xfs/581"]
+    USES_SOAK_DURATION += ["xfs/582"]
+    USES_SOAK_DURATION += ["xfs/583"]
+    USES_SOAK_DURATION += ["xfs/584"]
+    USES_SOAK_DURATION += ["xfs/585"]
+    USES_SOAK_DURATION += ["xfs/586"]
+    USES_SOAK_DURATION += ["xfs/587"]
+    USES_SOAK_DURATION += ["xfs/588"]
+    USES_SOAK_DURATION += ["xfs/589"]
+    USES_SOAK_DURATION += ["xfs/590"]
+    USES_SOAK_DURATION += ["xfs/591"]
+    USES_SOAK_DURATION += ["xfs/592"]
+    USES_SOAK_DURATION += ["xfs/593"]
+    USES_SOAK_DURATION += ["xfs/594"]
+    USES_SOAK_DURATION += ["xfs/595"]
+    USES_SOAK_DURATION += ["xfs/727"]
+    USES_SOAK_DURATION += ["xfs/729"]
+    USES_SOAK_DURATION += ["xfs/800"]
     if testname in USES_SOAK_DURATION:
         return True
     return False
 
+
 def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config):
     stall_suspect = False
     force_ssh = False
@@ -111,7 +115,7 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
     else:
         current_time_str = systemd_remote.get_current_time(host).rstrip()
 
-    fstests_date_str_format = '%Y-%m-%d %H:%M:%S'
+    fstests_date_str_format = "%Y-%m-%d %H:%M:%S"
     d1 = datetime.strptime(last_test_time, fstests_date_str_format)
     d2 = datetime.strptime(current_time_str, fstests_date_str_format)
 
@@ -120,31 +124,37 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
 
     soak_duration_seconds = 0
     if "CONFIG_FSTESTS_SOAK_DURATION" in config:
-        soak_duration_seconds = config["CONFIG_FSTESTS_SOAK_DURATION"].strip('\"')
+        soak_duration_seconds = config["CONFIG_FSTESTS_SOAK_DURATION"].strip('"')
         soak_duration_seconds = int(soak_duration_seconds)
 
     if "CONFIG_FSTESTS_WATCHDOG" not in config:
         enable_watchdog = False
     else:
-        enable_watchdog = config["CONFIG_FSTESTS_WATCHDOG"].strip('\"')
+        enable_watchdog = config["CONFIG_FSTESTS_WATCHDOG"].strip('"')
 
     if enable_watchdog:
-        max_new_test_time = config["CONFIG_FSTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip('\"')
+        max_new_test_time = config["CONFIG_FSTESTS_WATCHDOG_MAX_NEW_TEST_TIME"].strip(
+            '"'
+        )
         max_new_test_time = int(max_new_test_time)
         if not max_new_test_time:
             max_new_test_time = 60
 
-        hung_multiplier_long_tests = config["CONFIG_FSTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"].strip('\"')
+        hung_multiplier_long_tests = config[
+            "CONFIG_FSTESTS_WATCHDOG_HUNG_MULTIPLIER_LONG_TESTS"
+        ].strip('"')
         hung_multiplier_long_tests = int(hung_multiplier_long_tests)
         if not hung_multiplier_long_tests:
             hung_multiplier_long_tests = 10
 
-        hung_fast_test_max_time = config["CONFIG_FSTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"].strip('\"')
+        hung_fast_test_max_time = config[
+            "CONFIG_FSTESTS_WATCHDOG_HUNG_FAST_TEST_MAX_TIME"
+        ].strip('"')
         hung_fast_test_max_time = int(hung_fast_test_max_time)
         if not hung_fast_test_max_time:
             hung_fast_test_max_time = 5
 
-        checktime =  get_checktime(host, basedir, kernel, section, last_test)
+        checktime = get_checktime(host, basedir, kernel, section, last_test)
 
         # If no known prior run time test is known we use a max. This only
         # applies to the first run.
@@ -160,23 +170,37 @@ def get_fstest_host(use_remote, use_ssh, host, basedir, kernel, section, config)
         # If a test typically takes between 1 second to 30 seconds we can likely
         # safely assume the system has crashed after hung_fast_test_max_time
         # minutes
-        elif checktime >  0:
+        elif checktime > 0:
             suspect_crash_time_seconds = 60 * hung_fast_test_max_time
 
         if fstests_test_uses_soak_duration(last_test):
             suspect_crash_time_seconds += soak_duration_seconds
 
-        if delta_seconds >= suspect_crash_time_seconds and 'fstestsstart/000' not in last_test and 'fstestsend/000' not in last_test:
+        if (
+            delta_seconds >= suspect_crash_time_seconds
+            and "fstestsstart/000" not in last_test
+            and "fstestsend/000" not in last_test
+        ):
             stall_suspect = True
 
     return (last_test, last_test_time, current_time_str, delta_seconds, stall_suspect)
 
+
 def get_checktime(host, basedir, kernel, section, last_test):
-    checktime_dir = basedir + '/workflows/fstests/results/' + host + '/' + kernel + '/' + section + '/'
-    checktime_file = checktime_dir + 'check.time'
+    checktime_dir = (
+        basedir
+        + "/workflows/fstests/results/"
+        + host
+        + "/"
+        + kernel
+        + "/"
+        + section
+        + "/"
+    )
+    checktime_file = checktime_dir + "check.time"
     if not os.path.isfile(checktime_file):
         return 0
-    cp = open(checktime_file, 'r')
+    cp = open(checktime_file, "r")
     for line in cp:
         elems = line.rstrip().split(" ")
         this_test = elems[0].rstrip().replace(" ", "")
@@ -184,21 +208,28 @@ def get_checktime(host, basedir, kernel, section, last_test):
             return int(elems[1])
     return 0
 
+
 def get_config(dotconfig):
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     with open(dotconfig) as lines:
         lines = chain(("[top]",), lines)
         config.read_file(lines)
         return config["top"]
     return None
 
+
 def get_section(host, config):
-    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('\"')
+    hostprefix = config["CONFIG_KDEVOPS_HOSTS_PREFIX"].strip('"')
     return host.split(hostprefix + "-")[1].replace("-", "_")
 
+
 def get_hosts(hostfile, hostsection):
     hosts = []
-    config = configparser.ConfigParser(allow_no_value=True, strict=False, interpolation=None)
+    config = configparser.ConfigParser(
+        allow_no_value=True, strict=False, interpolation=None
+    )
     config.read(hostfile)
     if hostsection not in config:
         return hosts
diff --git a/scripts/workflows/lib/kssh.py b/scripts/workflows/lib/kssh.py
index 11472767..a9b4b29d 100644
--- a/scripts/workflows/lib/kssh.py
+++ b/scripts/workflows/lib/kssh.py
@@ -2,28 +2,36 @@
 
 import subprocess, os
 
+
 class KsshError(Exception):
     pass
+
+
 class ExecutionError(KsshError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(KsshError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def _check(process):
     if process.returncode != 0:
         raise ExecutionError(process.returncode)
 
+
 def dir_exists(host, dirname):
-    cmd = ['ssh', host,
-           'sudo',
-           'ls', '-ld',
-           dirname ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "sudo", "ls", "-ld", dirname]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -37,17 +45,35 @@ def dir_exists(host, dirname):
         else:
             return False
 
+
 def first_process_name_pid(host, process_name):
-    cmd = ['ssh', host,
-           'sudo',
-           'ps', '-ef',
-           '|', 'grep', '-v', 'grep',
-           '|', 'grep', process_name,
-           '|', 'awk', '\'{print $2}\'',
-           '|', 'tail', '-1' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "sudo",
+        "ps",
+        "-ef",
+        "|",
+        "grep",
+        "-v",
+        "grep",
+        "|",
+        "grep",
+        process_name,
+        "|",
+        "awk",
+        "'{print $2}'",
+        "|",
+        "tail",
+        "-1",
+    ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -62,14 +88,16 @@ def first_process_name_pid(host, process_name):
             return 0
         return int(stdout)
 
+
 def prog_exists(host, prog):
-    cmd = ['ssh', host,
-           'sudo',
-           'which',
-           prog ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "sudo", "which", prog]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -82,11 +110,16 @@ def prog_exists(host, prog):
             return False
         return True
 
+
 def get_uname(host):
-    cmd = ['ssh', host, 'uname', '-r' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = ["ssh", host, "uname", "-r"]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -99,28 +132,49 @@ def get_uname(host):
             return "Uname-issue"
         return stdout
 
+
 def get_test(host, suite):
-    if suite not in [ 'fstests', 'blktests']:
+    if suite not in ["fstests", "blktests"]:
         return None
     run_string = "run " + suite
-    cmd = ['ssh', host,
-           'sudo',
-           'dmesg',
-           '|', 'grep', '"' + run_string + '"',
-           '|', 'awk', '-F"' + run_string + ' "', '\'{print $2}\'',
-           '|', 'tail', '-1' ]
-    if prog_exists(host, 'journalctl'):
-        cmd = ['ssh', host,
-               'sudo',
-               'journalctl',
-               '-k',
-               '-g'
-               '"' + run_string + '"'
-               '|', 'awk', '-F"' + run_string + ' "', '\'{print $2}\'',
-               '|', 'tail', '-1' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "sudo",
+        "dmesg",
+        "|",
+        "grep",
+        '"' + run_string + '"',
+        "|",
+        "awk",
+        '-F"' + run_string + ' "',
+        "'{print $2}'",
+        "|",
+        "tail",
+        "-1",
+    ]
+    if prog_exists(host, "journalctl"):
+        cmd = [
+            "ssh",
+            host,
+            "sudo",
+            "journalctl",
+            "-k",
+            "-g" '"' + run_string + '"' "|",
+            "awk",
+            '-F"' + run_string + ' "',
+            "'{print $2}'",
+            "|",
+            "tail",
+            "-1",
+        ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -139,19 +193,33 @@ def get_test(host, suite):
 
         return stdout
 
+
 def get_last_fstest(host):
-    return get_test(host, 'fstests')
+    return get_test(host, "fstests")
+
 
 def get_last_blktest(host):
-    return get_test(host, 'blktests')
+    return get_test(host, "blktests")
+
 
 def get_current_time(host):
-    cmd = ['ssh', host,
-           'date', '--rfc-3339=\'seconds\'',
-           '|', 'awk', '-F"+"', '\'{print $1}\'' ]
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                               close_fds=True, universal_newlines=True)
+    cmd = [
+        "ssh",
+        host,
+        "date",
+        "--rfc-3339='seconds'",
+        "|",
+        "awk",
+        '-F"+"',
+        "'{print $1}'",
+    ]
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
diff --git a/scripts/workflows/lib/systemd_remote.py b/scripts/workflows/lib/systemd_remote.py
index c95ca7e5..1a65de9f 100644
--- a/scripts/workflows/lib/systemd_remote.py
+++ b/scripts/workflows/lib/systemd_remote.py
@@ -3,19 +3,27 @@
 import subprocess, os, sys
 from datetime import datetime
 
+
 class SystemdError(Exception):
     pass
+
+
 class ExecutionError(SystemdError):
     def __init__(self, errcode):
         self.error_code = errcode
+
+
 class TimeoutExpired(SystemdError):
     def __init__(self, errcode):
         self.error_code = errcode
         return "timeout"
 
+
 def get_host_ip(host):
     try:
-        result = subprocess.run(["ssh", "-G", host], capture_output=True, text=True, check=True)
+        result = subprocess.run(
+            ["ssh", "-G", host], capture_output=True, text=True, check=True
+        )
         for line in result.stdout.splitlines():
             if line.startswith("hostname "):
                 return line.split()[1]
@@ -23,15 +31,17 @@ def get_host_ip(host):
         logger.warning(f"Failed to resolve IP for {host}: {e}")
     return None
 
+
 def get_current_time(host):
-    format = '%Y-%m-%d %H:%M:%S'
+    format = "%Y-%m-%d %H:%M:%S"
     today = datetime.today()
     today_str = today.strftime(format)
     return today_str
 
+
 def get_extra_journals(remote_path, host):
     ip = get_host_ip(host)
-    extra_journals_path = "remote-" + ip + '@'
+    extra_journals_path = "remote-" + ip + "@"
     extra_journals = []
     for file in os.listdir(remote_path):
         if extra_journals_path in file:
@@ -39,37 +49,33 @@ def get_extra_journals(remote_path, host):
             extra_journals.append(remote_path + file)
     return extra_journals
 
+
 def get_uname(remote_path, host, configured_kernel):
     ip = get_host_ip(host)
     extra_journals = get_extra_journals(remote_path, host)
-    fpath = remote_path + "remote-" + ip + '.journal'
+    fpath = remote_path + "remote-" + ip + ".journal"
     grep = "Linux version"
-    grep_str = "\"Linux version\""
-    cmd = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           grep,
-           '--file',
-           fpath ]
+    grep_str = '"Linux version"'
+    cmd = ["journalctl", "--no-pager", "-n 1", "-k", "-g", grep, "--file", fpath]
     cmd = cmd + extra_journals
     cmd_verbose = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           grep_str,
-           '--file',
-           fpath ]
+        "journalctl",
+        "--no-pager",
+        "-n 1",
+        "-k",
+        "-g",
+        grep_str,
+        "--file",
+        fpath,
+    ]
     cmd_verbose = cmd_verbose + extra_journals
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -88,42 +94,41 @@ def get_uname(remote_path, host, configured_kernel):
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         if len(last_line.split(grep)) <= 1:
-            sys.stderr.write("\nThe string %s could not be used to split the line." % grep_str)
+            sys.stderr.write(
+                "\nThe string %s could not be used to split the line." % grep_str
+            )
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         kernel_line = last_line.split(grep)[1].strip()
         if len(last_line.split()) <= 1:
-            sys.stderr.write("\nThe string %s was used but could not find kernel version." % grep_str)
+            sys.stderr.write(
+                "\nThe string %s was used but could not find kernel version." % grep_str
+            )
             sys.stderr.write("\nCommand used:\n%s\n\n" % " ".join(cmd_verbose))
             return None
         kernel = kernel_line.split()[0].strip()
 
         return kernel
 
+
 # Returns something like "xfs/040 at 2023-12-17 23:52:14"
 def get_test(remote_path, host, suite):
     ip = get_host_ip(host)
-    if suite not in [ 'fstests', 'blktests']:
+    if suite not in ["fstests", "blktests"]:
         return None
     # Example: /var/log/journal/remote/remote-line-xfs-reflink.journal
-    fpath = remote_path + "remote-" + ip + '.journal'
+    fpath = remote_path + "remote-" + ip + ".journal"
     extra_journals = get_extra_journals(remote_path, host)
     run_string = "run " + suite
-    cmd = [
-           'journalctl',
-           '--no-pager',
-           '-n 1',
-           '-k',
-           '-g',
-           run_string,
-           '--file',
-           fpath ]
+    cmd = ["journalctl", "--no-pager", "-n 1", "-k", "-g", run_string, "--file", fpath]
     cmd = cmd + extra_journals
-    process = subprocess.Popen(cmd,
-                               stdout=subprocess.PIPE,
-                               stderr=subprocess.STDOUT,
-                               close_fds=True,
-                               universal_newlines=True)
+    process = subprocess.Popen(
+        cmd,
+        stdout=subprocess.PIPE,
+        stderr=subprocess.STDOUT,
+        close_fds=True,
+        universal_newlines=True,
+    )
     data = None
     try:
         data = process.communicate(timeout=120)
@@ -142,8 +147,10 @@ def get_test(remote_path, host, suite):
 
         return test_line
 
+
 def get_last_fstest(remote_path, host):
-    return get_test(remote_path, host, 'fstests')
+    return get_test(remote_path, host, "fstests")
+
 
 def get_last_blktest(remote_path, host):
-    return get_test(remote_path, host, 'blktests')
+    return get_test(remote_path, host, "blktests")
diff --git a/scripts/workflows/pynfs/check_pynfs_results.py b/scripts/workflows/pynfs/check_pynfs_results.py
index 724d2ace..753d177f 100755
--- a/scripts/workflows/pynfs/check_pynfs_results.py
+++ b/scripts/workflows/pynfs/check_pynfs_results.py
@@ -12,20 +12,21 @@ import json
 import sys
 import pprint
 
+
 def main():
     base = json.load(open(sys.argv[1]))
     result = json.load(open(sys.argv[2]))
 
     failures = {}
 
-    for case in result['testcase']:
-        if 'failure' in case:
-            failures[case['code']] = case
+    for case in result["testcase"]:
+        if "failure" in case:
+            failures[case["code"]] = case
 
-    for case in base['testcase']:
-        if 'failure' in case:
-            if case['code'] in failures:
-                del failures[case['code']]
+    for case in base["testcase"]:
+        if "failure" in case:
+            if case["code"] in failures:
+                del failures[case["code"]]
 
     if len(failures) != 0:
         pprint.pprint(failures)
@@ -33,6 +34,6 @@ def main():
     else:
         sys.exit(0)
 
+
 if __name__ == "__main__":
     main()
-
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (6 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 7/9] all: run black Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  2025-07-30  6:41   ` Daniel Gomez
  2025-07-30  6:01 ` [PATCH v2 9/9] bootlinux: add support for A/B kernel testing Luis Chamberlain
  8 siblings, 1 reply; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Debian testing (trixie) VMs can fail to provision when configured APT
mirrors become unavailable or unresponsive. This is particularly common
with local or regional mirrors that may have intermittent connectivity
issues.

This fix adds automatic mirror health checking specifically for Debian
testing systems. The implementation:

1. Extracts the currently configured APT mirror hostname
2. Tests connectivity to the mirror on port 80 with a 10 second timeout
3. Falls back to official Debian mirrors if the test fails
4. Backs up the original sources.list before making changes
5. Updates the APT cache after switching mirrors
6. Provides clear user notification about the fallback

The check only runs on Debian testing systems where devconfig_debian_testing
is set to true, avoiding any impact on stable Debian or other distributions.

This ensures that Debian testing VMs can successfully provision even when
the initially configured mirror is unavailable, improving reliability for
development and testing workflows.
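
The health check the tasks perform can be sketched in plain shell; the
function names here are illustrative only and the probe mirrors what the
Ansible shell pipeline and wait_for module do (hostname extraction from
sources.list, then a TCP port 80 probe with a 10 second timeout):

```shell
#!/bin/bash

# Extract the hostname of the first "deb http://..." entry in a
# sources.list file, mirroring the shell pipeline in the Ansible task.
extract_mirror_host() {
    grep -E '^deb\s+http' "$1" 2>/dev/null | head -1 | awk '{print $2}' \
        | sed 's|http://||' | cut -d'/' -f1
}

# Return success if the mirror answers on TCP port 80 within 10 seconds,
# roughly the probe wait_for performs.
mirror_reachable() {
    timeout 10 bash -c "exec 3<>/dev/tcp/$1/80" 2>/dev/null
}
```

In the role the fallback block only fires when a mirror hostname was
found but the probe failed, i.e. the equivalent of
`[ -n "$host" ] && ! mirror_reachable "$host"`.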

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .../devconfig/tasks/check-apt-mirrors.yml     | 63 +++++++++++++++++++
 playbooks/roles/devconfig/tasks/main.yml      |  8 +++
 .../debian-testing-fallback-sources.list      | 10 +++
 3 files changed, 81 insertions(+)
 create mode 100644 playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
 create mode 100644 playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list

diff --git a/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
new file mode 100644
index 00000000..02e0c800
--- /dev/null
+++ b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
@@ -0,0 +1,63 @@
+---
+# Only run mirror checks for Debian testing (trixie) where mirror issues are common
+- name: Extract current APT mirror hostname
+  shell: |
+    grep -E "^deb\s+http" /etc/apt/sources.list | head -1 | awk '{print $2}' | sed 's|http://||' | cut -d'/' -f1
+  register: apt_mirror_host
+  changed_when: false
+  ignore_errors: yes
+
+- name: Check connectivity to current APT mirror
+  wait_for:
+    host: "{{ apt_mirror_host.stdout }}"
+    port: 80
+    timeout: 10
+  register: mirror_connectivity
+  ignore_errors: yes
+  when: apt_mirror_host.stdout != ""
+
+- name: Display mirror check results
+  debug:
+    msg: |
+      Current APT mirror: {{ apt_mirror_host.stdout | default('Not found') }}
+      Mirror connectivity: {{ 'OK' if mirror_connectivity is not failed else 'FAILED' }}
+  when: apt_mirror_host.stdout != ""
+
+- name: Fall back to official Debian mirrors if current mirror fails
+  block:
+    - name: Backup current sources.list
+      copy:
+        src: /etc/apt/sources.list
+        dest: /etc/apt/sources.list.backup
+        remote_src: yes
+      become: yes
+
+    - name: Apply Debian testing fallback sources
+      template:
+        src: debian-testing-fallback-sources.list
+        dest: /etc/apt/sources.list
+        owner: root
+        group: root
+        mode: '0644'
+      become: yes
+
+    - name: Update APT cache after mirror change
+      apt:
+        update_cache: yes
+        cache_valid_time: 0
+      become: yes
+
+    - name: Inform user about mirror fallback
+      debug:
+        msg: |
+          WARNING: The configured APT mirror '{{ apt_mirror_host.stdout }}' is not accessible.
+          Falling back to official Debian testing mirrors:
+          - deb.debian.org for main packages
+          - security.debian.org for security updates
+
+          This may result in slower package downloads depending on your location.
+          Consider configuring a local mirror for better performance.
+
+  when:
+    - apt_mirror_host.stdout != ""
+    - mirror_connectivity is failed
diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
index 656d5389..ceb0f2e8 100644
--- a/playbooks/roles/devconfig/tasks/main.yml
+++ b/playbooks/roles/devconfig/tasks/main.yml
@@ -30,6 +30,14 @@
   tags: hostname
 
 # Distro specific
+
+# Check and fix APT mirrors for Debian testing before installing dependencies
+- name: Check and fix APT mirrors for Debian testing
+  include_tasks: check-apt-mirrors.yml
+  when:
+    - devconfig_debian_testing is defined
+    - devconfig_debian_testing | bool
+
 - name: Install dependencies
   ansible.builtin.include_tasks: install-deps/main.yml
   tags: ['vars', 'vars_simple']
diff --git a/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
new file mode 100644
index 00000000..456ed60f
--- /dev/null
+++ b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
@@ -0,0 +1,10 @@
+deb http://deb.debian.org/debian testing main contrib non-free non-free-firmware
+deb-src http://deb.debian.org/debian testing main contrib non-free non-free-firmware
+
+# Security updates
+deb http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
+deb-src http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
+
+# Updates (if available for testing)
+deb http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
+deb-src http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
\ No newline at end of file
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v2 9/9] bootlinux: add support for A/B kernel testing
  2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
                   ` (7 preceding siblings ...)
  2025-07-30  6:01 ` [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing Luis Chamberlain
@ 2025-07-30  6:01 ` Luis Chamberlain
  8 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-07-30  6:01 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops; +Cc: Luis Chamberlain

Right now we use the same kernel for all target nodes. We want to
compare and contrast different kernels for different features. We
add support for A/B testing by leveraging the baseline and dev groups
provided to us by KDEVOPS_BASELINE_AND_DEV.

This extends the bootlinux playbook to allow a different kernel
tree / ref to be used for the dev group. This is purely a configuration
change. The targets are intuitive:

  make linux                 # Handles A/B compilation transparently
  make linux-baseline        # Build and install baseline kernel only
  make linux-dev             # Build and install development kernel only

We also add a simple check to verify that all build types end up
respecting different kernel refs when so configured:

  make check-linux-ab

This does not launch targets; it just verifies we don't regress the
handling of the different ref tags in the future.
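
The invariant the check guards can be sketched in a few lines of shell;
the variable names match those emitted into extra_vars.yaml, but the
helper names are illustrative and this is not the actual
scripts/test-linux-ab.sh implementation:

```shell
#!/bin/bash

# Read a top-level "key: value" entry from a generated extra_vars.yaml.
get_ref() {
    grep "^$1:" "$2" 2>/dev/null | awk '{print $2}'
}

# With A/B testing enabled, the baseline and dev groups must resolve to
# two different, non-empty kernel refs.
ab_refs_differ() {
    local baseline dev
    baseline=$(get_ref bootlinux_tree_ref "$1")
    dev=$(get_ref bootlinux_dev_tree_ref "$1")
    [ -n "$baseline" ] && [ -n "$dev" ] && [ "$baseline" != "$dev" ]
}
```

A CI job can then simply run `ab_refs_differ extra_vars.yaml || exit 1`
after `make` has generated the file.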

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .github/workflows/linux-ab-testing.yml       | 217 +++++++++++++++++++
 .github/workflows/linux-ab.yml               |  47 ++++
 Makefile                                     |   1 +
 PROMPTS.md                                   |  52 +++++
 defconfigs/linux-ab-testing                  |  14 ++
 defconfigs/linux-ab-testing-9p               |  15 ++
 defconfigs/linux-ab-testing-builder          |  15 ++
 defconfigs/linux-ab-testing-target           |  15 ++
 docs/kdevops-make-linux.md                   | 158 ++++++++++++++
 playbooks/roles/bootlinux/defaults/main.yml  |  14 ++
 playbooks/roles/bootlinux/tasks/build/9p.yml |  20 +-
 playbooks/roles/bootlinux/tasks/main.yml     | 112 ++++++++++
 scripts/infer_last_stable_kernel.sh          |  35 +++
 scripts/linux-ab-testing.Makefile            |  51 +++++
 scripts/test-linux-ab-config.py              | 182 ++++++++++++++++
 scripts/test-linux-ab.sh                     | 213 ++++++++++++++++++
 workflows/linux/Kconfig                      | 102 ++++++++-
 workflows/linux/Makefile                     |  39 ++++
 18 files changed, 1291 insertions(+), 11 deletions(-)
 create mode 100644 .github/workflows/linux-ab-testing.yml
 create mode 100644 .github/workflows/linux-ab.yml
 create mode 100644 defconfigs/linux-ab-testing
 create mode 100644 defconfigs/linux-ab-testing-9p
 create mode 100644 defconfigs/linux-ab-testing-builder
 create mode 100644 defconfigs/linux-ab-testing-target
 create mode 100755 scripts/infer_last_stable_kernel.sh
 create mode 100644 scripts/linux-ab-testing.Makefile
 create mode 100755 scripts/test-linux-ab-config.py
 create mode 100755 scripts/test-linux-ab.sh

diff --git a/.github/workflows/linux-ab-testing.yml b/.github/workflows/linux-ab-testing.yml
new file mode 100644
index 00000000..aa52abbb
--- /dev/null
+++ b/.github/workflows/linux-ab-testing.yml
@@ -0,0 +1,217 @@
+name: Linux A/B Testing Verification
+
+on:
+  push:
+    branches:
+      - '**'
+  pull_request:
+    branches:
+      - '**'
+  workflow_dispatch:  # Allow manual triggering
+
+jobs:
+  linux-ab-testing-verification:
+    name: Verify Linux A/B Testing Variables
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        distro_container:
+          - debian:testing
+          - fedora:latest
+        build_method:
+          - target
+          - 9p
+          - builder
+
+    container: ${{ matrix.distro_container }}
+    steps:
+      - name: Document test environment
+        run: |
+          echo "Running Linux A/B testing verification on ${{ matrix.distro_container }}"
+          echo "Build method: ${{ matrix.build_method }}"
+          uname -a
+
+      - name: Install dependencies
+        run: |
+          if [ "${{ matrix.distro_container }}" = "debian:testing" ]; then
+            echo "Installing packages for Debian"
+            apt-get update
+            apt-get install -y ansible-core make gcc ncurses-dev bison flex git python3
+          elif [ "${{ matrix.distro_container }}" = "fedora:latest" ]; then
+            echo "Installing packages for Fedora"
+            dnf install -y ansible make gcc ncurses-devel bison flex git python3
+          else
+            echo "Unknown distribution: ${{ matrix.distro_container }}"
+            exit 1
+          fi
+
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Configure git for kdevops
+        run: |
+          git config --global --add safe.directory '*'
+          git config --global user.name "kdevops-ci"
+          git config --global user.email "kdevops@lists.linux.dev"
+
+      - name: Apply A/B testing defconfig
+        run: |
+          echo "Applying linux-ab-testing-${{ matrix.build_method }} defconfig"
+          make defconfig-linux-ab-testing-${{ matrix.build_method }}
+
+          # Verify configuration was applied correctly
+          echo "=== Verifying A/B testing configuration ==="
+          grep -E "CONFIG_KDEVOPS_BASELINE_AND_DEV=y" .config || exit 1
+          grep -E "CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y" .config || exit 1
+
+          # Check build method specific configs
+          case "${{ matrix.build_method }}" in
+            target)
+              grep -E "CONFIG_BOOTLINUX_TARGET=y" .config || exit 1
+              ;;
+            9p)
+              grep -E "CONFIG_BOOTLINUX_9P=y" .config || exit 1
+              ;;
+            builder)
+              grep -E "CONFIG_BOOTLINUX_BUILDER=y" .config || exit 1
+              ;;
+          esac
+
+      - name: Run make to generate extra_vars.yaml
+        run: |
+          make
+
+      - name: Extract and verify kernel references
+        run: |
+          echo "=== Extracting kernel references from configuration ==="
+
+          # Get the baseline ref (should be master or main)
+          BASELINE_REF=$(grep "^bootlinux_tree_ref:" extra_vars.yaml | awk '{print $2}')
+          echo "Baseline ref: $BASELINE_REF"
+
+          # Get the dev ref using the inference script
+          DEV_REF=$(grep "^bootlinux_dev_tree_ref:" extra_vars.yaml | awk '{print $2}')
+          echo "Dev ref from config: $DEV_REF"
+
+          # Since we're in a container without /mirror/linux.git, the script will fallback
+          # For CI, we'll simulate getting the latest stable tag
+          if [ -f scripts/infer_last_stable_kernel.sh ]; then
+            INFERRED_STABLE=$(./scripts/infer_last_stable_kernel.sh 2>/dev/null || echo "v6.12")
+            echo "Inferred stable version: $INFERRED_STABLE"
+          fi
+
+          # Verify refs are different
+          if [ "$BASELINE_REF" = "$DEV_REF" ]; then
+            echo "ERROR: Baseline and dev refs should be different for A/B testing"
+            exit 1
+          fi
+
+          # Store refs for later verification
+          echo "BASELINE_REF=$BASELINE_REF" >> $GITHUB_ENV
+          echo "DEV_REF=$DEV_REF" >> $GITHUB_ENV
+
+      - name: Test debug functionality with Ansible
+        run: |
+          echo "=== Testing debug output with DEBUG_REF=1 ==="
+
+          # Create a minimal hosts file for container testing
+          cat > hosts << EOF
+          [all]
+          localhost ansible_connection=local
+
+          [baseline]
+          localhost ansible_connection=local
+
+          [dev]
+          EOF
+
+          # Run the bootlinux playbook with debug enabled and capture output
+          export DEBUG_REF=1
+          ansible-playbook -i hosts playbooks/bootlinux.yml --tags vars,debug -v > debug_output.txt 2>&1 || true
+
+          # Verify debug output based on build method
+          case "${{ matrix.build_method }}" in
+            9p)
+              echo "=== Verifying 9P debug output (localhost context) ==="
+              if grep -q "active_linux_ref" debug_output.txt; then
+                echo "✓ Found active_linux_ref in 9P debug output"
+              else
+                echo "✗ Missing active_linux_ref in 9P debug output"
+                cat debug_output.txt
+                exit 1
+              fi
+              ;;
+            target|builder)
+              echo "=== Verifying non-9P debug output (per-node context) ==="
+              if grep -q "target_linux_ref" debug_output.txt; then
+                echo "✓ Found target_linux_ref in non-9P debug output"
+              else
+                echo "✗ Missing target_linux_ref in non-9P debug output"
+                cat debug_output.txt
+                exit 1
+              fi
+              ;;
+          esac
+
+      - name: Verify A/B testing Makefile rules
+        run: |
+          echo "=== Verifying A/B testing Makefile structure ==="
+
+          # Check that linux target depends on linux-baseline and linux-dev
+          if grep -A5 "^linux:" workflows/linux/Makefile | grep -q "linux-baseline linux-dev"; then
+            echo "✓ Makefile has correct A/B testing dependencies"
+          else
+            echo "✗ Makefile missing A/B testing dependencies"
+            exit 1
+          fi
+
+          # Verify linux-baseline and linux-dev targets exist
+          if grep -q "^linux-baseline:" workflows/linux/Makefile && \
+             grep -q "^linux-dev:" workflows/linux/Makefile; then
+            echo "✓ Both linux-baseline and linux-dev targets exist"
+          else
+            echo "✗ Missing linux-baseline or linux-dev targets"
+            exit 1
+          fi
+
+      - name: Test variable resolution patterns
+        run: |
+          echo "=== Testing variable resolution patterns ==="
+
+          # Create test playbook to verify variable resolution
+          cat > test_vars.yml << 'EOF'
+          ---
+          - hosts: localhost
+            connection: local
+            tasks:
+              - name: Load extra vars
+                include_vars: extra_vars.yaml
+
+              - name: Display loaded variables
+                debug:
+                  msg: |
+                    bootlinux_tree_ref: {{ bootlinux_tree_ref | default('undefined') }}
+                    bootlinux_dev_tree_ref: {{ bootlinux_dev_tree_ref | default('undefined') }}
+                    kdevops_baseline_and_dev: {{ kdevops_baseline_and_dev | default(false) }}
+                    bootlinux_ab_different_ref: {{ bootlinux_ab_different_ref | default(false) }}
+          EOF
+
+          ansible-playbook test_vars.yml -v
+
+      - name: Summary report
+        if: always()
+        run: |
+          echo "=== A/B Testing Verification Summary ==="
+          echo "Distribution: ${{ matrix.distro_container }}"
+          echo "Build Method: ${{ matrix.build_method }}"
+          echo "Baseline Ref: ${BASELINE_REF:-not set}"
+          echo "Dev Ref: ${DEV_REF:-not set}"
+          echo ""
+
+          if [ -f .config ]; then
+            echo "Key configurations:"
+            grep -E "(BASELINE_AND_DEV|AB_DIFFERENT_REF|BOOTLINUX_TARGET|BOOTLINUX_9P|BOOTLINUX_BUILDER)" .config | head -10
+          fi
+
+          echo ""
+          echo "Test completed successfully ✓"
diff --git a/.github/workflows/linux-ab.yml b/.github/workflows/linux-ab.yml
new file mode 100644
index 00000000..9162c887
--- /dev/null
+++ b/.github/workflows/linux-ab.yml
@@ -0,0 +1,47 @@
+name: Run kdevops linux-ab tests on self-hosted runner
+
+on:
+  push:
+    branches:
+      - '**'
+  pull_request:
+    branches:
+      - '**'
+  workflow_dispatch:  # Add this for manual triggering of the workflow
+
+jobs:
+  run-kdevops:
+    name: Run kdevops CI
+    runs-on: [self-hosted, Linux, X64]
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@v4
+
+      - name: Set CI metadata for kdevops-results-archive
+        run: |
+          echo "$(basename ${{ github.repository }})" > ci.trigger
+          git log -1 --pretty=format:"%s" > ci.subject
+          # Start out pessimistic
+          echo "not ok" > ci.result
+          echo "Nothing to write home about." > ci.commit_extra
+
+      - name: Set kdevops path
+        run: echo "KDEVOPS_PATH=$GITHUB_WORKSPACE" >> $GITHUB_ENV
+
+      - name: Configure git
+        run: |
+          git config --global --add safe.directory '*'
+          git config --global user.name "kdevops"
+          git config --global user.email "kdevops@lists.linux.dev"
+
+      - name: Run kdevops check-linux-ab
+        run: |
+          make check-linux-ab
+          echo "ok" > ci.result
+
+      # Ensure make destroy always runs, even on failure
+      - name: Run kdevops make destroy
+        if: always()
+        run: |
+          make destroy
+          make mrproper
diff --git a/Makefile b/Makefile
index c88637c2..8755577e 100644
--- a/Makefile
+++ b/Makefile
@@ -243,6 +243,7 @@ $(KDEVOPS_NODES): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_NODES_TEMPLATE)
 DEFAULT_DEPS += $(LOCALHOST_SETUP_WORK)
 
 include scripts/tests.Makefile
+include scripts/linux-ab-testing.Makefile
 include scripts/ci.Makefile
 include scripts/archive.Makefile
 include scripts/defconfig.Makefile
diff --git a/PROMPTS.md b/PROMPTS.md
index a4ecf39f..a92f96f8 100644
--- a/PROMPTS.md
+++ b/PROMPTS.md
@@ -123,3 +123,55 @@ source "workflows/mmtests/Kconfig.thpchallenge"
 source "workflows/mmtests/Kconfig.fs"
 
    This separation is preferred as it helps us scale.
+
+## Kernel development and A/B testing support
+
+### Adding A/B kernel testing support for different kernel versions
+
+**Prompt:**
+We want to add support so that when users enable KDEVOPS_BASELINE_AND_DEV we
+extend workflows/linux/Kconfig with a choice of options to either a)
+use the same kernel ref or b) allow the user to specify a different ref tag.
+This will enable A/B testing with different kernel versions. When different
+kernel refs are desirable we will want to extend the compilation and
+installation of the Linux kernel into two steps. The first will be for the ref
+and target of A (baseline tag) and the second will be for the target ref of B
+(dev tag). However we want to fold these two steps into one so that when
+KDEVOPS_BASELINE_AND_DEV is enabled and make install is used, it happens
+transparently for us. The resulting linux kernel directory would end up with
+the "dev" ref at the end. In case a user wants to re-compile a target ref for
+baseline or dev we want to add (if we don't have already) a make linux-baseline
+and make linux-dev so that we can build and install the target ref tag on the
+baseline (A) or dev (B). The make linux target then would serially do make
+linux-baseline and make linux-dev. Extend documentation for all this and also
+add the respective prompt to PROMPTS.md once done. Avoid adding extra spaces to
+code or documentation at the end of each line. These end up in red color on
+diffs and hurt my eyes. Extend CLAUDE.md with these styling rules about not
+wanting lines ending in white space.
+
+**AI:** Claude Code
+**Commit:** [To be determined]
+**Result:** Complete A/B kernel testing implementation with comprehensive configuration options.
+**Grading:** 70%
+
+**Notes:**
+
+The implementation successfully added:
+
+1. **Makefile Implementation**: the AI failed to grasp the value of the
+   output yaml, and made ugly Makefile changes to extract variables.
+
+2. **Ansible Integration**: The AI failed to write the required changes to
+   the ansible playbook at first. A secondary prompt made it merely move the
+   definitions to the ansible playbook but failed to address serially compiling
+   linux for the baseline group first, followed by the dev group.
+
+3. **Documentation**: The AI did not grasp the preference to respect
+   80-character line lengths.
+
+4. **Issues**: The AI failed to understand a really obscure ansible lesson
+   which even humans have trouble with: you can't override a fact and later
+   rely on it, especially when it is used across multiple hosts. The best
+   thing to do is use a separate fact if you want a truly dynamic variable.
+   This is why we switched to an active ref prefix for the baseline and dev
+   group ref tags.
diff --git a/defconfigs/linux-ab-testing b/defconfigs/linux-ab-testing
new file mode 100644
index 00000000..c752e6a9
--- /dev/null
+++ b/defconfigs/linux-ab-testing
@@ -0,0 +1,14 @@
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+
+# Enable baseline and dev testing
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# Enable A/B testing with different kernel references
+CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y
diff --git a/defconfigs/linux-ab-testing-9p b/defconfigs/linux-ab-testing-9p
new file mode 100644
index 00000000..35d589aa
--- /dev/null
+++ b/defconfigs/linux-ab-testing-9p
@@ -0,0 +1,15 @@
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+# CONFIG_BOOTLINUX_BUILDER is not set
+
+# Enable baseline and dev testing
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# Enable A/B testing with different kernel references
+CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y
diff --git a/defconfigs/linux-ab-testing-builder b/defconfigs/linux-ab-testing-builder
new file mode 100644
index 00000000..0b881709
--- /dev/null
+++ b/defconfigs/linux-ab-testing-builder
@@ -0,0 +1,15 @@
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+# CONFIG_BOOTLINUX_9P is not set
+CONFIG_BOOTLINUX_BUILDER=y
+
+# Enable baseline and dev testing
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# Enable A/B testing with different kernel references
+CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y
diff --git a/defconfigs/linux-ab-testing-target b/defconfigs/linux-ab-testing-target
new file mode 100644
index 00000000..21c72b56
--- /dev/null
+++ b/defconfigs/linux-ab-testing-target
@@ -0,0 +1,15 @@
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+# CONFIG_BOOTLINUX_9P is not set
+# CONFIG_BOOTLINUX_BUILDER is not set
+
+# Enable baseline and dev testing
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# Enable A/B testing with different kernel references
+CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y
diff --git a/docs/kdevops-make-linux.md b/docs/kdevops-make-linux.md
index e68eee5f..8f54372b 100644
--- a/docs/kdevops-make-linux.md
+++ b/docs/kdevops-make-linux.md
@@ -13,3 +13,161 @@ To verify the kernel on it:
 ```bash
 make uname
 ```
+
+## A/B Kernel Testing
+
+kdevops supports A/B testing with different kernel versions when
+`KDEVOPS_BASELINE_AND_DEV` is enabled. This allows you to compare performance
+or behavior between different kernel versions across baseline and development
+nodes.
+
+### Configuration Options
+
+When A/B testing is enabled, you can choose between two approaches:
+
+#### Same Kernel Reference (Default)
+Use the same kernel tree and reference for both baseline and dev nodes:
+```
+A/B kernel testing configuration (BOOTLINUX_AB_SAME_REF) [Y/n/?]
+```
+
+This is useful for testing configuration changes or different test parameters
+with identical kernels.
+
+#### Different Kernel References
+Use different kernel references for baseline and dev nodes:
+```
+A/B kernel testing configuration
+  1. Use same kernel reference for baseline and dev (BOOTLINUX_AB_SAME_REF)
+> 2. Use different kernel references for baseline and dev (BOOTLINUX_AB_DIFFERENT_REF)
+```
+
+This enables testing between different kernel versions, commits, or branches.
+
+When using different references, configure:
+- **Development kernel tree URL**: Git repository (defaults to baseline tree)
+- **Development kernel reference**: Branch, tag, or commit (e.g., "v6.8", "linux-next")
+- **Development kernel release/local version**: Custom version strings for identification
+
+### Make Targets
+
+#### Standard Linux Building
+```bash
+make linux                 # Build and install kernels for all nodes
+```
+
+When A/B testing with different references is enabled, this automatically:
+1. Builds and installs baseline kernel on baseline nodes
+2. Builds and installs development kernel on dev nodes
+3. Leaves the working directory with the dev kernel checked out
+
+#### Individual Node Targeting
+```bash
+make linux-baseline        # Build and install kernel for baseline nodes only
+make linux-dev             # Build and install kernel for dev nodes only
+```
+
+These targets are available when `KDEVOPS_BASELINE_AND_DEV=y` and allow
+selective building and installation.
+
+### Usage Examples
+
+#### Testing Kernel Versions
+Compare v6.7 (baseline) vs v6.8 (development):
+
+```bash
+# Configure baseline kernel
+menuconfig → Workflows → Linux kernel → Git tree to clone: linus
+            Reference to use: v6.7
+
+# Configure A/B testing
+menuconfig → Workflows → Linux kernel → A/B kernel testing
+            → Use different kernel references
+            → Development kernel reference: v6.8
+
+make bringup               # Provision baseline and dev nodes
+make linux                 # Install v6.7 on baseline, v6.8 on dev
+make fstests               # Run tests on both kernel versions
+make fstests-compare       # Compare results between versions
+```
+
+#### Testing Development Branches
+Compare stable vs linux-next:
+
+```bash
+# Baseline: stable kernel
+menuconfig → Reference to use: v6.8
+
+# Development: linux-next
+menuconfig → A/B kernel testing → Development kernel reference: linux-next
+
+make linux-baseline        # Install stable kernel on baseline nodes
+make linux-dev             # Install linux-next on dev nodes
+```
+
+#### Bisection Support
+Test specific commits during bisection:
+
+```bash
+# Update development reference for bisection
+menuconfig → Development kernel reference: abc123def
+
+make linux-dev             # Install bisection commit on dev nodes
+# Run tests and analyze results
+```
+
+### Working Directory State
+
+After running `make linux` with different references:
+- The Linux source directory contains the **development kernel** checkout
+- Both baseline and dev nodes have their respective kernels installed
+- Use `git log --oneline -5` to verify the current checkout
+
+To switch the working directory to baseline:
+```bash
+git checkout v6.7          # Switch to baseline reference
+```
+
+### Integration with Testing Workflows
+
+A/B kernel testing integrates seamlessly with all kdevops testing workflows:
+
+```bash
+# Run fstests with kernel comparison
+make linux                 # Install different kernels
+make fstests               # Test both kernel versions
+make fstests-compare       # Generate comparison analysis
+
+# Run fio-tests with kernel comparison
+make linux                 # Install different kernels
+make fio-tests             # Performance test both kernels
+make fio-tests-compare     # Compare performance metrics
+
+# Run sysbench with kernel comparison
+make linux                 # Install different kernels
+make sysbench              # Database tests on both kernels
+```
+
+### Best Practices
+
+1. **Version Identification**: Use descriptive kernel release versions to distinguish builds
+2. **Sequential Testing**: Install kernels before running test workflows
+3. **Result Organization**: Use baseline/dev labels in test result analysis
+4. **Git Management**: Keep track of which reference is currently checked out
+5. **Systematic Comparison**: Use `*-compare` targets for meaningful analysis
+
+### Troubleshooting
+
+#### Build Failures
+- Ensure both kernel references are valid and accessible
+- Check that build dependencies are installed on all nodes
+- Verify git repository permissions and network connectivity
+
+#### Version Conflicts
+- Use different `kernelrelease` and `localversion` settings for clear identification
+- Check `/boot` directory for kernel installation conflicts
+- Verify GRUB configuration after kernel installation
+
+#### Node Targeting Issues
+- Confirm `KDEVOPS_BASELINE_AND_DEV=y` is enabled
+- Verify baseline and dev node groups exist in inventory
+- Check ansible host patterns with `make linux-baseline HOSTS=baseline`
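+
+As a quick sanity check of the node groups (assuming the default `hosts`
+inventory file kdevops generates), something like the following should list
+both baseline and dev nodes:
+
+```bash
+ansible-inventory -i hosts --graph
+ansible baseline,dev -i hosts --list-hosts
+```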
diff --git a/playbooks/roles/bootlinux/defaults/main.yml b/playbooks/roles/bootlinux/defaults/main.yml
index fc6bfec0..bbb85f00 100644
--- a/playbooks/roles/bootlinux/defaults/main.yml
+++ b/playbooks/roles/bootlinux/defaults/main.yml
@@ -52,3 +52,17 @@ bootlinux_tree_set_by_cli: False
 bootlinux_artifacts_dir: "{{ topdir_path }}/workflows/linux/artifacts"
 kernel_packages: []
 workflow_linux_packaged: false
+
+# A/B testing defaults
+bootlinux_ab_same_ref: True
+bootlinux_ab_different_ref: False
+
+# Development kernel settings (used when bootlinux_ab_different_ref is True)
+bootlinux_dev_tree: ""
+target_linux_dev_ref: "master"
+target_linux_dev_kernelrelease: ""
+target_linux_dev_localversion: ""
+bootlinux_tree_custom_kernelrelease: False
+bootlinux_tree_custom_localversion: False
+bootlinux_is_dev_node: False
+bootlinux_debug_ref: "{{ lookup('env', 'DEBUG_REF') | default(false, true) | bool }}"
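+
+# Usage sketch for the debug toggle above (the variable is consumed by the
+# debug tasks in tasks/main.yml, which print the resolved refs and then end
+# the play early):
+#   DEBUG_REF=1 make linux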
diff --git a/playbooks/roles/bootlinux/tasks/build/9p.yml b/playbooks/roles/bootlinux/tasks/build/9p.yml
index bc2a66b6..1951e50e 100644
--- a/playbooks/roles/bootlinux/tasks/build/9p.yml
+++ b/playbooks/roles/bootlinux/tasks/build/9p.yml
@@ -50,7 +50,7 @@
     dest: "{{ bootlinux_9p_host_path }}"
     update: yes
     depth: "{{ target_linux_shallow_depth }}"
-    version: "{{ target_linux_ref }}"
+    version: "{{ active_linux_ref | default(target_linux_ref) }}"
   retries: 3
   delay: 5
   register: result
@@ -106,9 +106,9 @@
   delegate_to: localhost
 
 - name: Set kernel localversion if requested on the control node
-  shell: "echo {{ target_linux_localversion }} > {{ bootlinux_9p_host_path }}/localversion"
+  shell: "echo {{ active_linux_localversion | default(target_linux_localversion) }} > {{ bootlinux_9p_host_path }}/localversion"
   when:
-    - target_linux_localversion is defined and target_linux_localversion != ""
+    - (active_linux_localversion is defined and active_linux_localversion != "") or (target_linux_localversion is defined and target_linux_localversion != "")
   run_once: true
   delegate_to: localhost
 
@@ -139,16 +139,16 @@
   register: target_linux_kernelversion
   tags: [ 'build-linux' ]
   when:
-    - target_linux_kernelrelease | length > 0
+    - (active_linux_kernelrelease | default(target_linux_kernelrelease)) | length > 0
   run_once: true
   delegate_to: localhost
 
-- name: Generate user kernelrelease {{ target_linux_kernelversion.stdout }}-{{ target_linux_kernelrelease }}
+- name: Generate user kernelrelease {{ target_linux_kernelversion.stdout }}-{{ active_linux_kernelrelease | default(target_linux_kernelrelease) }}
   set_fact:
-    target_user_kernelrelease: "{{ target_linux_kernelversion.stdout }}-{{ target_linux_kernelrelease }}"
+    target_user_kernelrelease: "{{ target_linux_kernelversion.stdout }}-{{ active_linux_kernelrelease | default(target_linux_kernelrelease) }}"
   tags: [ 'build-linux' ]
   when:
-    - target_linux_kernelrelease | length > 0
+    - (active_linux_kernelrelease | default(target_linux_kernelrelease)) | length > 0
   run_once: true
   delegate_to: localhost
 
@@ -160,17 +160,17 @@
       KERNELRELEASE={{ target_user_kernelrelease }}
   tags: [ 'build-linux' ]
   when:
-    - target_linux_kernelrelease | length > 0
+    - (active_linux_kernelrelease | default(target_linux_kernelrelease)) | length > 0
   run_once: true
   delegate_to: localhost
 
-- name: Build {{ target_linux_tree }} {{ target_user_kernelrelease }} on the control node using {{ nproc_9p.stdout }} threads
+- name: Build {{ target_linux_tree }} on the control node using {{ nproc_9p.stdout }} threads
   make:
     jobs: "{{ nproc_9p.stdout }}"
     chdir: "{{ bootlinux_9p_host_path }}"
   tags: [ 'build-linux' ]
   when:
-    - target_linux_kernelrelease | length == 0
+    - (active_linux_kernelrelease | default(target_linux_kernelrelease)) | length == 0
   run_once: true
   delegate_to: localhost
 
diff --git a/playbooks/roles/bootlinux/tasks/main.yml b/playbooks/roles/bootlinux/tasks/main.yml
index acf77086..769bd100 100644
--- a/playbooks/roles/bootlinux/tasks/main.yml
+++ b/playbooks/roles/bootlinux/tasks/main.yml
@@ -59,6 +59,118 @@
     - not kdevops_baseline_and_dev|bool
     - not workflow_linux_packaged|bool
 
+- name: Determine if this is a dev node for A/B testing
+  set_fact:
+    bootlinux_is_dev_node: "{{ ansible_hostname | regex_search('^.*-dev$') is not none }}"
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+
+- name: Set development group full custom kernel release
+  set_fact:
+    target_linux_kernelrelease: "{{ target_linux_dev_kernelrelease if target_linux_dev_kernelrelease != '' else target_linux_kernelrelease }}"
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+    - bootlinux_tree_custom_kernelrelease|bool
+    - bootlinux_is_dev_node|bool
+
+- name: Set development group local append version
+  set_fact:
+    target_linux_localversion: "{{ target_linux_dev_localversion if target_linux_dev_localversion != '' else target_linux_localversion }}"
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+    - bootlinux_tree_custom_localversion|bool
+    - bootlinux_is_dev_node|bool
+
+- name: Set development kernel parameters for dev nodes
+  set_fact:
+    target_linux_git: "{{ bootlinux_dev_tree if bootlinux_dev_tree != '' else target_linux_git }}"
+    target_linux_ref: "{{ target_linux_dev_ref }}"
+    target_linux_config: "config-{{ target_linux_dev_ref }}"
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+    - bootlinux_is_dev_node|bool
+
+# A/B testing support for 9P builds
+# When using A/B testing with different kernel refs and 9P builds, we need to
+# determine which ref to use based on whether we're targeting dev or baseline nodes.
+# Since 9P builds run on localhost with run_once, we can't rely on per-node variables,
+# so we check the ansible_limit to determine which group is being targeted.
+- name: Determine if we're targeting dev nodes for A/B testing
+  set_fact:
+    targeting_dev_nodes: "{{ groups['dev'] is defined and groups['dev'] | length > 0 and (ansible_limit is not defined or 'dev' in ansible_limit) }}"
+  run_once: true
+  delegate_to: localhost
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+
+- name: Determine active kernel parameters for A/B testing with 9P
+  set_fact:
+    target_linux_git: "{{ bootlinux_dev_tree if bootlinux_dev_tree != '' else target_linux_git }}"
+    active_linux_ref: "{{ target_linux_dev_ref if targeting_dev_nodes|default(false)|bool else target_linux_ref }}"
+    active_linux_kernelrelease: "{{ target_linux_dev_kernelrelease if (targeting_dev_nodes|default(false)|bool and bootlinux_tree_custom_kernelrelease|bool) else target_linux_kernelrelease }}"
+    active_linux_localversion: "{{ target_linux_dev_localversion if (targeting_dev_nodes|default(false)|bool and bootlinux_tree_custom_localversion|bool) else target_linux_localversion }}"
+    target_linux_config: "config-{{ target_linux_dev_ref }}"
+  when:
+    - kdevops_baseline_and_dev|bool
+    - bootlinux_ab_different_ref|bool
+    - bootlinux_9p|bool
+  run_once: true
+  delegate_to: localhost
+
+- name: Debug kernel ref settings for 9P builds
+  delegate_to: localhost
+  block:
+    - name: Print kernel ref settings for 9P debug (localhost context)
+      debug:
+        msg:
+          - "=== 9P BUILD DEBUG (localhost context) ==="
+          - "bootlinux_9p: {{ bootlinux_9p }}"
+          - "target_linux_git: {{ target_linux_git }}"
+          - "active_linux_ref: {{ active_linux_ref | default('NOT SET') }}"
+          - "active_linux_kernelrelease: {{ active_linux_kernelrelease | default('NOT SET') }}"
+          - "active_linux_localversion: {{ active_linux_localversion | default('NOT SET') }}"
+          - "target_linux_config: {{ target_linux_config }}"
+          - "targeting_dev_nodes: {{ targeting_dev_nodes | default('NOT SET') }}"
+          - "ansible_limit: {{ ansible_limit | default('NOT SET') }}"
+          - "groups['dev']: {{ groups['dev'] | default([]) }}"
+          - "groups['baseline']: {{ groups['baseline'] | default([]) }}"
+
+    - name: End play gracefully for kernel ref debug
+      meta: end_play
+  when:
+    - bootlinux_debug_ref|bool
+    - bootlinux_9p|bool
+  run_once: true
+
+- name: Debug kernel ref settings for non-9P builds
+  block:
+    - name: Print kernel ref settings for non-9P debug (per-node context)
+      debug:
+        msg:
+          - "=== NON-9P BUILD DEBUG ({{ inventory_hostname }}) ==="
+          - "bootlinux_9p: {{ bootlinux_9p }}"
+          - "inventory_hostname: {{ inventory_hostname }}"
+          - "group_names: {{ group_names }}"
+          - "bootlinux_is_dev_node: {{ bootlinux_is_dev_node }}"
+          - "target_linux_git: {{ target_linux_git }}"
+          - "target_linux_ref: {{ target_linux_ref }}"
+          - "target_linux_kernelrelease: {{ target_linux_kernelrelease }}"
+          - "target_linux_localversion: {{ target_linux_localversion }}"
+          - "target_linux_dev_ref: {{ target_linux_dev_ref }}"
+          - "target_linux_dev_kernelrelease: {{ target_linux_dev_kernelrelease }}"
+          - "target_linux_dev_localversion: {{ target_linux_dev_localversion }}"
+
+    - name: End play gracefully for kernel ref debug
+      meta: end_play
+  when:
+    - bootlinux_debug_ref|bool
+    - not bootlinux_9p|bool
+
 - name: Create data partition
   ansible.builtin.include_role:
     name: create_data_partition
diff --git a/scripts/infer_last_stable_kernel.sh b/scripts/infer_last_stable_kernel.sh
new file mode 100755
index 00000000..8d97f882
--- /dev/null
+++ b/scripts/infer_last_stable_kernel.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+# This script infers a recent stable kernel version from the git repository.
+# It picks the second-to-last non-rc tag (e.g., v6.13 when v6.14 is the
+# latest), which makes a good default baseline for A/B testing.
+
+GIT_TREE="${1:-/mirror/linux.git}"
+
+if [ ! -d "$GIT_TREE" ]; then
+    echo "v6.12"  # fallback if no git tree available
+    exit 0
+fi
+
+# Get all v6.x tags, excluding release candidates; sort by version and take
+# the second-to-last stable release as a conservative A/B baseline default
+LAST_STABLE=$(git --git-dir="$GIT_TREE" tag --list 'v6.*' | \
+    grep -v -- '-rc' | \
+    sort -V | \
+    tail -2 | head -1)
+
+if [ -z "$LAST_STABLE" ]; then
+    # If no stable v6.x found, try v5.x as fallback
+    LAST_STABLE=$(git --git-dir="$GIT_TREE" tag --list 'v5.*' | \
+        grep -v -- '-rc' | \
+        sort -V | \
+        tail -2 | head -1)
+fi
+
+# Final fallback if nothing found
+if [ -z "$LAST_STABLE" ]; then
+    echo "v6.12"
+else
+    echo "$LAST_STABLE"
+fi
diff --git a/scripts/linux-ab-testing.Makefile b/scripts/linux-ab-testing.Makefile
new file mode 100644
index 00000000..bf94d4ab
--- /dev/null
+++ b/scripts/linux-ab-testing.Makefile
@@ -0,0 +1,51 @@
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Linux A/B testing verification for kdevops
+# Verifies that A/B testing ref configurations are correct
+
+# Test scripts
+LINUX_AB_TEST_SCRIPT_CONFIG :=	scripts/test-linux-ab-config.py
+LINUX_AB_TEST_SCRIPT :=		scripts/test-linux-ab.sh
+
+# Test verbosity
+LINUX_AB_TEST_VERBOSE ?= 0
+
+PHONY += check-linux-ab-help
+check-linux-ab-help:
+	@echo "Linux A/B testing verification:"
+	@echo "check-linux-ab            - Run full A/B testing verification (all build methods)"
+	@echo "check-linux-ab-config     - Quick check of current configuration only"
+	@echo ""
+	@echo "check-linux-ab runs the full test suite:"
+	@echo "  - Tests all three build methods (target, 9p, builder)"
+	@echo "  - Applies each defconfig and verifies settings"
+	@echo "  - Checks that refs are different"
+	@echo "  - Outputs results in TAP format"
+	@echo "  - Returns error code on any failure"
+	@echo ""
+	@echo "check-linux-ab-config only verifies current config:"
+	@echo "  - A/B testing is enabled in .config"
+	@echo "  - target_linux_ref and target_linux_dev_ref are different"
+	@echo "  - Both refs are valid (not empty or None)"
+	@echo ""
+
+# Main verification target - runs comprehensive tests
+PHONY += check-linux-ab
+check-linux-ab:
+	@if [ ! -f $(LINUX_AB_TEST_SCRIPT) ]; then \
+		echo "Error: Test script not found at $(LINUX_AB_TEST_SCRIPT)"; \
+		exit 1; \
+	fi
+	$(LINUX_AB_TEST_SCRIPT)
+
+# Quick verification - just checks current configuration
+PHONY += check-linux-ab-config
+check-linux-ab-config:
+	@if [ ! -f $(LINUX_AB_TEST_SCRIPT_CONFIG) ]; then \
+		echo "Error: Test script not found at $(LINUX_AB_TEST_SCRIPT_CONFIG)"; \
+		exit 1; \
+	fi
+	@python3 $(LINUX_AB_TEST_SCRIPT_CONFIG)
+
+# Add to help system
+HELP_TARGETS += check-linux-ab-help
diff --git a/scripts/test-linux-ab-config.py b/scripts/test-linux-ab-config.py
new file mode 100755
index 00000000..db6a831f
--- /dev/null
+++ b/scripts/test-linux-ab-config.py
@@ -0,0 +1,182 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+"""
+Linux A/B testing verification for kdevops
+Verifies that A/B testing ref configurations are set correctly
+"""
+
+import os
+import sys
+import re
+
+
+class LinuxABTester:
+    """Test runner for Linux A/B testing configurations"""
+
+    def __init__(self):
+        self.failed_checks = []
+
+    def check_config_file(self):
+        """Check if .config exists and has A/B testing enabled"""
+        if not os.path.exists(".config"):
+            print(
+                "❌ No .config file found - run 'make menuconfig' or apply a defconfig first"
+            )
+            return False
+
+        with open(".config", "r") as f:
+            config = f.read()
+
+        # Check if A/B testing is enabled
+        if "CONFIG_KDEVOPS_BASELINE_AND_DEV=y" not in config:
+            print("❌ A/B testing not enabled (CONFIG_KDEVOPS_BASELINE_AND_DEV=y)")
+            return False
+
+        if "CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y" not in config:
+            print("❌ Different refs not enabled (CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y)")
+            return False
+
+        print("✓ A/B testing configuration enabled")
+        return True
+
+    def check_extra_vars(self):
+        """Check if extra_vars.yaml has been generated"""
+        if not os.path.exists("extra_vars.yaml"):
+            print("❌ No extra_vars.yaml found - run 'make' to generate it")
+            return False
+        return True
+
+    def verify_refs(self):
+        """Extract and verify kernel references are different"""
+        print("\nChecking kernel references...")
+
+        with open("extra_vars.yaml", "r") as f:
+            content = f.read()
+
+        # Extract refs
+        baseline_match = re.search(r"^target_linux_ref:\s*(.+)$", content, re.MULTILINE)
+        dev_match = re.search(r"^target_linux_dev_ref:\s*(.+)$", content, re.MULTILINE)
+
+        if not baseline_match:
+            print("❌ Could not find target_linux_ref in extra_vars.yaml")
+            self.failed_checks.append("missing-baseline-ref")
+            return False
+
+        if not dev_match:
+            print("❌ Could not find target_linux_dev_ref in extra_vars.yaml")
+            self.failed_checks.append("missing-dev-ref")
+            return False
+
+        baseline_ref = baseline_match.group(1).strip()
+        dev_ref = dev_match.group(1).strip()
+
+        print(f"  Baseline ref: {baseline_ref}")
+        print(f"  Dev ref: {dev_ref}")
+
+        if baseline_ref == dev_ref:
+            print("❌ ERROR: Baseline and dev refs are the same!")
+            print("  This defeats the purpose of A/B testing")
+            self.failed_checks.append("refs-identical")
+            return False
+
+        # Check if refs look valid
+        if not baseline_ref or baseline_ref == "None":
+            print("❌ Baseline ref is empty or None")
+            self.failed_checks.append("invalid-baseline-ref")
+            return False
+
+        if not dev_ref or dev_ref == "None":
+            print("❌ Dev ref is empty or None")
+            self.failed_checks.append("invalid-dev-ref")
+            return False
+
+        print("✓ Refs are different and valid")
+        return True
+
+    def check_makefile_structure(self):
+        """Verify the Makefile has proper A/B testing targets"""
+        print("\nChecking Makefile structure...")
+
+        makefile_path = "workflows/linux/Makefile"
+        if not os.path.exists(makefile_path):
+            print(f"⚠️  Cannot verify - {makefile_path} not found")
+            return True  # Don't fail if file doesn't exist
+
+        with open(makefile_path, "r") as f:
+            content = f.read()
+
+        # Check for A/B testing targets
+        has_baseline_target = bool(
+            re.search(r"^linux-baseline:", content, re.MULTILINE)
+        )
+        has_dev_target = bool(re.search(r"^linux-dev:", content, re.MULTILINE))
+
+        if not has_baseline_target:
+            print("⚠️  Missing linux-baseline target in Makefile")
+
+        if not has_dev_target:
+            print("⚠️  Missing linux-dev target in Makefile")
+
+        if has_baseline_target and has_dev_target:
+            print("✓ Makefile has A/B testing targets")
+
+        return True
+
+    def run_checks(self):
+        """Run all verification checks"""
+        print("Linux A/B Testing Reference Verification")
+        print("=" * 50)
+
+        # Check .config
+        if not self.check_config_file():
+            return False
+
+        # Check extra_vars.yaml
+        if not self.check_extra_vars():
+            return False
+
+        # Verify refs are different
+        if not self.verify_refs():
+            return False
+
+        # Check Makefile (informational only)
+        self.check_makefile_structure()
+
+        print("\n" + "=" * 50)
+        if self.failed_checks:
+            print(f"❌ Verification failed: {', '.join(self.failed_checks)}")
+            return False
+        else:
+            print("✅ A/B testing refs verified successfully!")
+            return True
+
+
+def main():
+    """Main entry point"""
+    import argparse
+
+    parser = argparse.ArgumentParser(
+        description="Verify Linux A/B testing ref configurations",
+        epilog="This tool only checks configurations; it does not run any builds or tests.",
+    )
+    parser.add_argument(
+        "-v", "--verbose", action="store_true", help="Enable verbose output"
+    )
+
+    args = parser.parse_args()
+
+    # Quick check for current directory
+    if not os.path.exists("Kconfig"):
+        print("❌ Error: Must be run from kdevops root directory")
+        sys.exit(1)
+
+    tester = LinuxABTester()
+    success = tester.run_checks()
+
+    sys.exit(0 if success else 1)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/scripts/test-linux-ab.sh b/scripts/test-linux-ab.sh
new file mode 100755
index 00000000..a13964e4
--- /dev/null
+++ b/scripts/test-linux-ab.sh
@@ -0,0 +1,213 @@
+#!/bin/bash
+# SPDX-License-Identifier: copyleft-next-0.3.1
+#
+# Test all possible Linux A/B configurations locally.
+# The goal is to verify your extra_vars.yaml ends up with different kernel
+# target refs for A and B group hosts. It also checks that ansible will use
+# these refs. No real bringup or live test is done.
+#
+# Outputs TAP (Test Anything Protocol) format results
+
+set -e
+set -o pipefail
+
+# Colors for output (disabled if not a terminal or if NO_COLOR is set)
+if [ -t 1 ] && [ -z "${NO_COLOR}" ] && [ "${TERM}" != "dumb" ]; then
+    RED='\033[0;31m'
+    GREEN='\033[0;32m'
+    YELLOW='\033[1;33m'
+    NC='\033[0m' # No Color
+else
+    RED=''
+    GREEN=''
+    YELLOW=''
+    NC=''
+fi
+
+# TAP counters
+TOTAL_TESTS=0
+PASSED_TESTS=0
+FAILED_TESTS=0
+declare -a FAILED_DETAILS
+
+# Function to output TAP result
+tap_result() {
+    local result=$1
+    local test_name=$2
+    local details=$3
+
+    TOTAL_TESTS=$((TOTAL_TESTS + 1))
+
+    if [ "$result" = "ok" ]; then
+        PASSED_TESTS=$((PASSED_TESTS + 1))
+        echo "ok $TOTAL_TESTS - $test_name"
+    else
+        FAILED_TESTS=$((FAILED_TESTS + 1))
+        echo "not ok $TOTAL_TESTS - $test_name"
+        if [ -n "$details" ]; then
+            echo "  ---"
+            echo "  message: $details"
+            echo "  ..."
+            FAILED_DETAILS+=("$test_name: $details")
+        fi
+    fi
+}
+
+# Function to check condition and output TAP
+check_condition() {
+    local condition=$1
+    local test_name=$2
+    local error_msg=${3:-"Condition failed"}
+
+    if eval "$condition"; then
+        tap_result "ok" "$test_name"
+        return 0
+    else
+        tap_result "not ok" "$test_name" "$error_msg"
+        return 1
+    fi
+}
+
+echo "# Testing Linux A/B configuration locally"
+echo "# This script tests the configuration without requiring Docker"
+echo "TAP version 13"
+echo ""
+
+# Save current state
+if [ -f .config ]; then
+    cp .config .config.backup.$$
+    tap_result "ok" "Backup current .config to .config.backup.$$"
+else
+    tap_result "ok" "No existing .config to backup"
+fi
+
+# Function to restore state
+restore_state() {
+    if [ -f .config.backup.$$ ]; then
+        mv .config.backup.$$ .config >/dev/null 2>&1
+        echo "# Restored original .config"
+    fi
+}
+
+# Set trap to restore on exit
+trap restore_state EXIT
+
+# Test each build method
+BUILD_METHODS="target 9p builder"
+echo "1..18" # We expect 18 tests total (6 per build method x 3 methods)
+
+for method in $BUILD_METHODS; do
+    echo ""
+    echo "# Testing $method build method"
+
+    # Clean and apply defconfig
+    if make mrproper >/dev/null 2>&1; then
+        tap_result "ok" "$method: Clean environment (make mrproper)"
+    else
+        tap_result "not ok" "$method: Clean environment (make mrproper)" "Failed to run make mrproper"
+    fi
+
+    # Apply defconfig
+    if make defconfig-linux-ab-testing-$method >/dev/null 2>&1; then
+        tap_result "ok" "$method: Apply defconfig-linux-ab-testing-$method"
+    else
+        tap_result "not ok" "$method: Apply defconfig-linux-ab-testing-$method" "Failed to apply defconfig"
+        continue
+    fi
+
+    # Generate configuration
+    if make >/dev/null 2>&1; then
+        tap_result "ok" "$method: Generate configuration (make)"
+    else
+        tap_result "not ok" "$method: Generate configuration (make)" "Failed to run make"
+        continue
+    fi
+
+    # Verify A/B testing is enabled
+    check_condition "grep -q 'CONFIG_KDEVOPS_BASELINE_AND_DEV=y' .config" \
+        "$method: A/B testing enabled (CONFIG_KDEVOPS_BASELINE_AND_DEV=y)" \
+        "A/B testing not enabled in .config"
+
+    # Verify different refs enabled
+    check_condition "grep -q 'CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y' .config" \
+        "$method: Different refs enabled (CONFIG_BOOTLINUX_AB_DIFFERENT_REF=y)" \
+        "Different refs not enabled in .config"
+
+    # Verify build method specific config
+    case "$method" in
+        target)
+            check_condition "grep -q 'CONFIG_BOOTLINUX_TARGETS=y' .config" \
+                "$method: Target build enabled (CONFIG_BOOTLINUX_TARGETS=y)" \
+                "Target build not enabled"
+            ;;
+        9p)
+            check_condition "grep -q 'CONFIG_BOOTLINUX_9P=y' .config" \
+                "$method: 9P build enabled (CONFIG_BOOTLINUX_9P=y)" \
+                "9P build not enabled"
+            ;;
+        builder)
+            check_condition "grep -q 'CONFIG_BOOTLINUX_BUILDER=y' .config" \
+                "$method: Builder build enabled (CONFIG_BOOTLINUX_BUILDER=y)" \
+                "Builder build not enabled"
+            ;;
+    esac
+done
+
+# Additional tests for ref extraction
+echo ""
+echo "# Testing ref extraction from final configuration"
+
+# Check if extra_vars.yaml was generated
+if [ -f extra_vars.yaml ]; then
+    tap_result "ok" "extra_vars.yaml exists"
+
+    # Extract refs with new variable names
+    BASELINE_REF=$(grep "^target_linux_ref:" extra_vars.yaml 2>/dev/null | awk '{print $2}')
+    DEV_REF=$(grep "^target_linux_dev_ref:" extra_vars.yaml 2>/dev/null | awk '{print $2}')
+
+    # Check baseline ref
+    if [ -n "$BASELINE_REF" ]; then
+        tap_result "ok" "Baseline ref found: $BASELINE_REF"
+    else
+        tap_result "not ok" "Baseline ref not found" "Could not find target_linux_ref in extra_vars.yaml"
+    fi
+
+    # Check dev ref
+    if [ -n "$DEV_REF" ]; then
+        tap_result "ok" "Dev ref found: $DEV_REF"
+    else
+        tap_result "not ok" "Dev ref not found" "Could not find target_linux_dev_ref in extra_vars.yaml"
+    fi
+
+    # Check refs are different
+    if [ -n "$BASELINE_REF" ] && [ -n "$DEV_REF" ] && [ "$BASELINE_REF" != "$DEV_REF" ]; then
+        tap_result "ok" "Refs are different (baseline: $BASELINE_REF, dev: $DEV_REF)"
+    else
+        tap_result "not ok" "Refs are not different" "Baseline and dev refs should be different for A/B testing"
+    fi
+else
+    tap_result "not ok" "extra_vars.yaml exists" "File not found"
+fi
+
+# Summary
+echo ""
+echo "# Test Summary"
+echo "# ============"
+echo "# Total tests: $TOTAL_TESTS"
+printf "# Passed: ${GREEN}%d${NC}\n" "$PASSED_TESTS"
+printf "# Failed: ${RED}%d${NC}\n" "$FAILED_TESTS"
+
+if [ $FAILED_TESTS -gt 0 ]; then
+    echo ""
+    printf "${RED}# Failed tests:${NC}\n"
+    for failure in "${FAILED_DETAILS[@]}"; do
+        printf "${RED}#   - %s${NC}\n" "$failure"
+    done
+    echo ""
+    printf "${RED}# FAIL: A/B testing verification failed${NC}\n"
+    exit 1
+else
+    echo ""
+    printf "${GREEN}# PASS: All A/B testing verifications passed!${NC}\n"
+    exit 0
+fi
diff --git a/workflows/linux/Kconfig b/workflows/linux/Kconfig
index 06742f3e..44456904 100644
--- a/workflows/linux/Kconfig
+++ b/workflows/linux/Kconfig
@@ -324,23 +324,39 @@ config BOOTLINUX_TREE_REF
 	default BOOTLINUX_TREE_CEL_LINUX_REF if BOOTLINUX_TREE_CEL_LINUX
 	default BOOTLINUX_TREE_CUSTOM_REF if BOOTLINUX_CUSTOM
 
+config BOOTLINUX_TREE_CUSTOM_KERNELRELEASE
+	bool "Do you want a full custom kernel release name?"
+	output yaml
+	help
+	  Do you want a fully custom Linux kernel release string, which will
+	  be output through uname?
+
 config BOOTLINUX_TREE_KERNELRELEASE
 	string "Linux kernel release version to use"
+	depends on BOOTLINUX_TREE_CUSTOM_KERNELRELEASE
 	help
 	  The Linux kernel release version to use (for uname).
 
 	  The string here (e.g. 'devel') will be appended to the result of make
 	  kernelversion. Example: '6.8.0-rc3-devel'
 
+config BOOTLINUX_TREE_CUSTOM_LOCALVERSION
+	bool "Do you want to append a custom kernel release tag?"
+	output yaml
+	help
+	  Do you want to append a custom local version tag to the Linux
+	  kernel release which will be output through uname?
 
 config BOOTLINUX_TREE_LOCALVERSION
 	string "Linux local version to use"
+	depends on BOOTLINUX_TREE_CUSTOM_LOCALVERSION
 	help
 	  The Linux local version to use (for uname).
 
 config BOOTLINUX_SHALLOW_CLONE
 	bool "Shallow git clone"
-	default y
+	default y if !KDEVOPS_BASELINE_AND_DEV
+	depends on !BOOTLINUX_AB_DIFFERENT_REF
 	help
 	  If enabled, the git tree will be cloned using a shallow tree
 	  with history truncated. You want to enable this if you really don't
@@ -351,6 +367,10 @@ config BOOTLINUX_SHALLOW_CLONE
 	  just using the targets as dummy target runners and don't expect to
 	  be using 'git log' on the target guests.
 
+	  This option is automatically disabled when using A/B testing with
+	  different kernel references, as shallow clones may not contain all
+	  the required refs for checkout.
+
 config BOOTLINUX_SHALLOW_CLONE_DEPTH
 	int "Shallow git clone depth"
 	default 30 if BOOTLINUX_TREE_SET_BY_CLI
@@ -361,4 +381,84 @@ config BOOTLINUX_SHALLOW_CLONE_DEPTH
 	  number or revisions. The minimum possible value is 1, otherwise
 	  ignored. Needs git>=1.9.1 to work correctly.
 
+if KDEVOPS_BASELINE_AND_DEV
+
+choice
+	prompt "A/B kernel testing configuration"
+	default BOOTLINUX_AB_SAME_REF
+	help
+	  When A/B testing is enabled, you can choose to use the same
+	  kernel reference for both baseline and dev nodes, or specify
+	  different kernel references to test different kernel versions.
+
+config BOOTLINUX_AB_SAME_REF
+	bool "Use same kernel reference for baseline and dev"
+	output yaml
+	help
+	  Use the same kernel tree and reference for both baseline and
+	  development nodes. This is useful for testing configuration
+	  changes or different test parameters with the same kernel.
+
+config BOOTLINUX_AB_DIFFERENT_REF
+	bool "Use different kernel references for baseline and dev"
+	output yaml
+	help
+	  Use different kernel references for baseline and development
+	  nodes. This enables testing between different kernel versions,
+	  commits, or branches. The baseline will use the main configured
+	  kernel reference, while dev uses a separate reference.
+
+endchoice
+
+if BOOTLINUX_AB_DIFFERENT_REF
+
+config BOOTLINUX_DEV_TREE
+	string "Development kernel tree URL"
+	output yaml
+	default BOOTLINUX_TREE
+	help
+	  Git tree URL for the development kernel. If left empty or set to
+	  the same value as the baseline tree, the same tree will be used with
+	  a different reference. This allows testing different branches or forks.
+
+config TARGET_LINUX_DEV_REF
+	string "Development kernel reference"
+	output yaml
+	default $(shell, scripts/infer_last_stable_kernel.sh)
+	help
+	  Git reference (branch, tag, or commit) for the development kernel.
+	  This should be different from the baseline reference to enable
+	  meaningful A/B comparison between kernel versions.
+
+	  The default is automatically inferred as the most recent stable
+	  kernel version (e.g., v6.15) from the git repository.
+
+	  Examples:
+	  - "v6.8" (stable release)
+	  - "linux-next" (latest development)
+	  - "v6.7..v6.8" (range for bisection)
+	  - commit SHA (specific commit)
+
+config TARGET_LINUX_DEV_KERNELRELEASE
+	string "Development kernel release version"
+	depends on BOOTLINUX_TREE_CUSTOM_KERNELRELEASE
+	output yaml
+	help
+	  The string here (e.g. 'devel') will be appended to the result of make
+	  kernelversion. Example: '6.8.0-rc3-devel' but only for the dev group.
+	  Leave it empty unless you want a custom tag at the end.
+
+config TARGET_LINUX_DEV_LOCALVERSION
+	string "Development kernel local version"
+	output yaml
+	depends on BOOTLINUX_TREE_CUSTOM_LOCALVERSION
+	default BOOTLINUX_TREE_LOCALVERSION
+	help
+	  The Linux local version to use for the development kernel (for uname).
+	  If left empty, will use the same as baseline.
+
+endif # BOOTLINUX_AB_DIFFERENT_REF
+
+endif # KDEVOPS_BASELINE_AND_DEV
+
 endif # BOOTLINUX
diff --git a/workflows/linux/Makefile b/workflows/linux/Makefile
index bbc2c3d4..30b123f9 100644
--- a/workflows/linux/Makefile
+++ b/workflows/linux/Makefile
@@ -74,6 +74,10 @@ PHONY +=  linux-help-menu
 linux-help-menu:
 	@echo "Linux git kernel development options"
 	@echo "linux              - Git clones a linux git tree, build Linux, installs and reboots into it"
+	@if [[ "$(CONFIG_KDEVOPS_BASELINE_AND_DEV)" == "y" ]]; then \
+		echo "linux-baseline     - Build and install kernel for baseline nodes only" ;\
+		echo "linux-dev          - Build and install kernel for dev nodes only" ;\
+	fi
 	@if [[ "$(CONFIG_BOOTLINUX_9P)" == "y" ]]; then \
 		echo "linux-mount        - Mounts 9p path on targets" ;\
 	fi
@@ -93,11 +97,46 @@ linux-help-end:
 LINUX_HELP_EXTRA :=
 
 PHONY += linux
+ifeq (y,$(CONFIG_KDEVOPS_BASELINE_AND_DEV))
+ifeq (y,$(CONFIG_BOOTLINUX_AB_DIFFERENT_REF))
+linux: linux-baseline linux-dev
+else
+linux: $(KDEVOPS_NODES)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) -i hosts \
+		$(KDEVOPS_PLAYBOOKS_DIR)/bootlinux.yml \
+		--extra-vars="$(BOOTLINUX_ARGS)" $(LIMIT_HOSTS)
+endif
+else
 linux: $(KDEVOPS_NODES)
 	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
 		--limit 'baseline:dev' \
 		$(KDEVOPS_PLAYBOOKS_DIR)/bootlinux.yml \
 		--extra-vars="$(BOOTLINUX_ARGS)" $(LIMIT_HOSTS)
+endif
+
+PHONY += linux-baseline
+ifeq (y,$(CONFIG_KDEVOPS_BASELINE_AND_DEV))
+linux-baseline: $(KDEVOPS_NODES)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) -i hosts \
+		$(KDEVOPS_PLAYBOOKS_DIR)/bootlinux.yml \
+		--extra-vars="$(BOOTLINUX_ARGS)" --limit baseline
+else
+linux-baseline:
+	@echo "linux-baseline requires KDEVOPS_BASELINE_AND_DEV=y"
+	@exit 1
+endif
+
+PHONY += linux-dev
+ifeq (y,$(CONFIG_KDEVOPS_BASELINE_AND_DEV))
+linux-dev: $(KDEVOPS_NODES)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) -i hosts \
+		$(KDEVOPS_PLAYBOOKS_DIR)/bootlinux.yml \
+		--extra-vars="$(BOOTLINUX_ARGS)" --limit dev
+else
+linux-dev:
+	@echo "linux-dev requires KDEVOPS_BASELINE_AND_DEV=y"
+	@exit 1
+endif
 
 PHONY += linux-mount
 linux-mount:
-- 
2.47.2


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation
  2025-07-30  6:01 ` [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation Luis Chamberlain
@ 2025-07-30  6:22   ` Daniel Gomez
  0 siblings, 0 replies; 19+ messages in thread
From: Daniel Gomez @ 2025-07-30  6:22 UTC (permalink / raw)
  To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops

On 30/07/2025 08.01, Luis Chamberlain wrote:
> The initial configuration generation playbooks (ansible_cfg.yml,
> gen_hosts.yml, and gen_nodes.yml) run without a proper inventory file
> by design, as they are responsible for creating the configuration files
> that will be used by subsequent playbooks.
> 
> This causes Ansible to emit warnings about:
> - "No inventory was parsed, only implicit localhost is available"
> - "provided hosts list is empty, only localhost is available"
> 
> These warnings are harmless but create noise in the build output,
> potentially confusing users who might think something is wrong.

Agree.

> 
> Add ANSIBLE_LOCALHOST_WARNING=False and ANSIBLE_INVENTORY_UNPARSED_WARNING=False
> environment variables to these three ansible-playbook invocations to
> suppress these specific warnings. This makes the build output cleaner
> while maintaining the same functionality.
> 
> The warnings were appearing because:
> 1. ansible_cfg.yml creates the ansible.cfg file
> 2. gen_hosts.yml creates the hosts inventory file
> 3. gen_nodes.yml creates the kdevops_nodes.yaml file
> 
> All three intentionally run with "--connection=local" and minimal
> inventory since they're bootstrapping the configuration that other
> playbooks will use.
> 
> Generated-by: Claude AI
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  Makefile | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/Makefile b/Makefile
> index 37c2522b..31f544e9 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -195,7 +195,8 @@ include scripts/gen-nodes.Makefile
>  	false)
>  
>  $(ANSIBLE_CFG_FILE): .config
> -	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
> +	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
> +		ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
>  		--inventory localhost, \
>  		$(KDEVOPS_PLAYBOOKS_DIR)/ansible_cfg.yml \
>  		--extra-vars=@./.extra_vars_auto.yaml
> @@ -226,13 +227,15 @@ endif
>  
>  DEFAULT_DEPS += $(ANSIBLE_INVENTORY_FILE)
>  $(ANSIBLE_INVENTORY_FILE): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_HOSTS_TEMPLATE)
> -	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
> +	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \
> +		ansible-playbook $(ANSIBLE_VERBOSE) \
>  		$(KDEVOPS_PLAYBOOKS_DIR)/gen_hosts.yml \
>  		--extra-vars=@./extra_vars.yaml
>  
>  DEFAULT_DEPS += $(KDEVOPS_NODES)
>  $(KDEVOPS_NODES): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_NODES_TEMPLATE)
> -	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
> +	$(Q)ANSIBLE_LOCALHOST_WARNING=False ANSIBLE_INVENTORY_UNPARSED_WARNING=False \

I think at this point we have the inventory file with all guests/hosts
and the ansible.cfg. If that is correct, I think we may not need to disable
ANSIBLE_INVENTORY_UNPARSED_WARNING and ANSIBLE_LOCALHOST_WARNING here, and
we can probably remove the --connection=local and --inventory localhost,
arguments.
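If the generated inventory and ansible.cfg are indeed in place by this point,
the recipe could be reduced to something like the sketch below (untested,
simply mirroring the gen_hosts.yml rule from the same patch):

```make
DEFAULT_DEPS += $(KDEVOPS_NODES)
$(KDEVOPS_NODES): .config $(ANSIBLE_CFG_FILE) $(KDEVOPS_NODES_TEMPLATE)
	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
		$(KDEVOPS_PLAYBOOKS_DIR)/gen_nodes.yml \
		--extra-vars=@./extra_vars.yaml
```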

> +		ansible-playbook $(ANSIBLE_VERBOSE) --connection=local \
>  		--inventory localhost, \
>  		$(KDEVOPS_PLAYBOOKS_DIR)/gen_nodes.yml \
>  		--extra-vars=@./extra_vars.yaml

But in general it makes sense to enable these variables only for these specific
ansible-playbook invocations. I initially thought we could inject them into
the ansible.cfg and wrap ansible-playbook with the new environment, allowing
subsequent invocations to leverage that. But we don't want the rest of the
ansible-playbook runs to suppress these warnings, in case they do exist, as that
would hide errors.

Thanks!

Reviewed-by: Daniel Gomez <da.gomez@samsung.com>

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing
  2025-07-30  6:01 ` [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing Luis Chamberlain
@ 2025-07-30  6:41   ` Daniel Gomez
  2025-08-01 17:39     ` Luis Chamberlain
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Gomez @ 2025-07-30  6:41 UTC (permalink / raw)
  To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops

On 30/07/2025 08.01, Luis Chamberlain wrote:
> Debian testing (trixie) VMs can fail to provision when configured APT
> mirrors become unavailable or unresponsive. This is particularly common
> with local or regional mirrors that may have intermittent connectivity
> issues.
> 
> This fix adds automatic mirror health checking specifically for Debian
> testing systems. The implementation:
> 
> 1. Extracts the currently configured APT mirror hostname
> 2. Tests connectivity to the mirror on port 80 with a 10 second timeout
> 3. Falls back to official Debian mirrors if the test fails
> 4. Backs up the original sources.list before making changes
> 5. Updates the APT cache after switching mirrors
> 6. Provides clear user notification about the fallback
> 
> The check only runs on Debian testing systems where devconfig_debian_testing
> is set to true, avoiding any impact on stable Debian or other distributions.
> 
> This ensures that Debian testing VMs can successfully provision even when
> the initially configured mirror is unavailable, improving reliability for
> development and testing workflows.
> 
> Generated-by: Claude AI
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  .../devconfig/tasks/check-apt-mirrors.yml     | 63 +++++++++++++++++++
>  playbooks/roles/devconfig/tasks/main.yml      |  8 +++
>  .../debian-testing-fallback-sources.list      | 10 +++
>  3 files changed, 81 insertions(+)
>  create mode 100644 playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
>  create mode 100644 playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> 
> diff --git a/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
> new file mode 100644
> index 00000000..02e0c800
> --- /dev/null
> +++ b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
> @@ -0,0 +1,63 @@
> +---
> +# Only run mirror checks for Debian testing (trixie) where mirror issues are common
> +- name: Extract current APT mirror hostname
> +  shell: |
> +    grep -E "^deb\s+http" /etc/apt/sources.list | head -1 | awk '{print $2}' | sed 's|http://||' | cut -d'/' -f1
> +  register: apt_mirror_host
> +  changed_when: false
> +  ignore_errors: yes
> +
> +- name: Check connectivity to current APT mirror
> +  wait_for:
> +    host: "{{ apt_mirror_host.stdout }}"
> +    port: 80
> +    timeout: 10
> +  register: mirror_connectivity
> +  ignore_errors: yes
> +  when: apt_mirror_host.stdout != ""
> +
> +- name: Display mirror check results
> +  debug:
> +    msg: |
> +      Current APT mirror: {{ apt_mirror_host.stdout | default('Not found') }}
> +      Mirror connectivity: {{ 'OK' if mirror_connectivity is not failed else 'FAILED' }}
> +  when: apt_mirror_host.stdout != ""
> +
> +- name: Fall back to official Debian mirrors if current mirror fails
> +  block:
> +    - name: Backup current sources.list
> +      copy:
> +        src: /etc/apt/sources.list
> +        dest: /etc/apt/sources.list.backup
> +        remote_src: yes
> +      become: yes
> +
> +    - name: Apply Debian testing fallback sources
> +      template:
> +        src: debian-testing-fallback-sources.list
> +        dest: /etc/apt/sources.list
> +        owner: root
> +        group: root
> +        mode: '0644'
> +      become: yes
> +
> +    - name: Update APT cache after mirror change
> +      apt:
> +        update_cache: yes
> +        cache_valid_time: 0
> +      become: yes
> +
> +    - name: Inform user about mirror fallback
> +      debug:
> +        msg: |
> +          WARNING: The configured APT mirror '{{ apt_mirror_host.stdout }}' is not accessible.
> +          Falling back to official Debian testing mirrors:
> +          - deb.debian.org for main packages
> +          - security.debian.org for security updates
> +
> +          This may result in slower package downloads depending on your location.
> +          Consider configuring a local mirror for better performance.
> +
> +  when:
> +    - apt_mirror_host.stdout != ""
> +    - mirror_connectivity is failed
> diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
> index 656d5389..ceb0f2e8 100644
> --- a/playbooks/roles/devconfig/tasks/main.yml
> +++ b/playbooks/roles/devconfig/tasks/main.yml
> @@ -30,6 +30,14 @@
>    tags: hostname
>  
>  # Distro specific
> +
> +# Check and fix APT mirrors for Debian testing before installing dependencies
> +- name: Check and fix APT mirrors for Debian testing
> +  include_tasks: check-apt-mirrors.yml
> +  when:
> +    - devconfig_debian_testing is defined
> +    - devconfig_debian_testing | bool
> +
>  - name: Install dependencies
>    ansible.builtin.include_tasks: install-deps/main.yml
>    tags: ['vars', 'vars_simple']
> diff --git a/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> new file mode 100644
> index 00000000..456ed60f
> --- /dev/null
> +++ b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> @@ -0,0 +1,10 @@
> +deb http://deb.debian.org/debian testing main contrib non-free non-free-firmware
> +deb-src http://deb.debian.org/debian testing main contrib non-free non-free-firmware
> +
> +# Security updates
> +deb http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
> +deb-src http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
> +
> +# Updates (if available for testing)
> +deb http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
> +deb-src http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
> \ No newline at end of file

Debian has switched to a new sources format [1]. They have also provided a
manual conversion for users with "apt modernize-sources". So it makes sense to
start using the new format directly here.

[1] https://wiki.debian.org/SourcesList
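For reference, a deb822-style equivalent of the fallback template above might
look like this (a sketch only; the suites and components mirror the one-line
sources.list entries, and the file name follows current Debian defaults):

```
# /etc/apt/sources.list.d/debian.sources
Types: deb deb-src
URIs: http://deb.debian.org/debian
Suites: testing testing-updates
Components: main contrib non-free non-free-firmware

Types: deb deb-src
URIs: http://security.debian.org/debian-security
Suites: testing-security
Components: main contrib non-free non-free-firmware
```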

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False
  2025-07-30  6:01 ` [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
@ 2025-07-30 14:17   ` Chuck Lever
  0 siblings, 0 replies; 19+ messages in thread
From: Chuck Lever @ 2025-07-30 14:17 UTC (permalink / raw)
  To: Luis Chamberlain, Daniel Gomez, kdevops

On 7/30/25 2:01 AM, Luis Chamberlain wrote:
> Commit 722d83a4871c41 ("bootlinux: Move 9p build tasks to a subrole")
> added a proactive directory creation as we no longer git clone right
> away, but forgot to ensure the variable is defined by default. When
> we don't enable building linux this variable is not defined. Fix this.
> 
> Fixes: 722d83a4871c41 ("bootlinux: Move 9p build tasks to a subrole")
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  playbooks/roles/guestfs/defaults/main.yml | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/playbooks/roles/guestfs/defaults/main.yml b/playbooks/roles/guestfs/defaults/main.yml
> index eec137bd..76854d06 100644
> --- a/playbooks/roles/guestfs/defaults/main.yml
> +++ b/playbooks/roles/guestfs/defaults/main.yml
> @@ -3,3 +3,4 @@ distro_debian_based: false
>  
>  libvirt_uri_system: false
>  libvirt_enable_largeio: false
> +bootlinux_9p: False

For "Move 9p build tasks to a subrole" IIRC all Kconfig configurations
allowed "output yaml" to set bootlinux_9p, so a default was unnecessary.

Some later patch made it possible for "make menuconfig" to complete
without encountering that "output yaml" so that must be what leaves the
variable unset.


Reviewed-by: Chuck Lever <chuck.lever@oracle.com>


-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 7/9] all: run black
  2025-07-30  6:01 ` [PATCH v2 7/9] all: run black Luis Chamberlain
@ 2025-07-31 12:57   ` Daniel Gomez
  2025-08-01  8:12     ` Daniel Gomez
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Gomez @ 2025-07-31 12:57 UTC (permalink / raw)
  To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops

On 30/07/2025 08.01, Luis Chamberlain wrote:
> Run black to fix tons of styling issues with tons of Python scripts.
> In order to help bots ensure they don't add odd styling we need a
> convention.
> 
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>

Acked-by: Daniel Gomez <da.gomez@samsung.com>

FYI, I'm working on b4 check stuff mentioned in the other thread. I think this
is really nice but it would be awesome to also extend it to Ansible files using
the Ansible linter:

ansible-lint --help
{...}
--fix [WRITE_LIST]    Allow ansible-lint to perform auto-fixes, including YAML
reformatting

So, these changes with black/ansible-lint, etc make sense if:
1. Add b4 integration
2. Make CI run the scripts as well (make style, make check, etc)

But I suspect a minimal testing may be needed.

Thoughts?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 7/9] all: run black
  2025-07-31 12:57   ` Daniel Gomez
@ 2025-08-01  8:12     ` Daniel Gomez
  2025-08-01 12:55       ` Chuck Lever
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Gomez @ 2025-08-01  8:12 UTC (permalink / raw)
  To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops



On 31/07/2025 14.57, Daniel Gomez wrote:
> On 30/07/2025 08.01, Luis Chamberlain wrote:
>> Run black to fix tons of styling issues with tons of Python scripts.
>> In order to help bots ensure they don't add odd styling we need a
>> convention.
>>
>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> 
> Acked-by: Daniel Gomez <da.gomez@samsung.com>
> 
> FYI, I'm working on b4 check stuff mentioned in the other thread. I think this
> is really nice but it would be awesome to also extend it to Ansible files using
> the Ansible linter:
> 
> ansible-lint --help
> {...}
> --fix [WRITE_LIST]    Allow ansible-lint to perform auto-fixes, including YAML
> reformatting
> 
> So, these changes with black/ansible-lint, etc make sense if:
> 1. Add b4 integration
> 2. Make CI run the scripts as well (make style, make check, etc)
> 
> But I suspect a minimal testing may be needed.
> 
> Thoughts?

I'll send an RFC with more details.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 7/9] all: run black
  2025-08-01  8:12     ` Daniel Gomez
@ 2025-08-01 12:55       ` Chuck Lever
  2025-08-01 16:29         ` Daniel Gomez
  0 siblings, 1 reply; 19+ messages in thread
From: Chuck Lever @ 2025-08-01 12:55 UTC (permalink / raw)
  To: Daniel Gomez; +Cc: Luis Chamberlain, Daniel Gomez, kdevops

On 8/1/25 4:12 AM, Daniel Gomez wrote:
> 
> 
> On 31/07/2025 14.57, Daniel Gomez wrote:
>> On 30/07/2025 08.01, Luis Chamberlain wrote:
>>> Run black to fix tons of styling issues with tons of Python scripts.
>>> In order to help bots ensure they don't add odd styling we need a
>>> convention.
>>>
>>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
>>
>> Acked-by: Daniel Gomez <da.gomez@samsung.com>
>>
>> FYI, I'm working on b4 check stuff mentioned in the other thread. I think this
>> is really nice but it would be awesome to also extend it to Ansible files using
>> the Ansible linter:
>>
>> ansible-lint --help
>> {...}
>> --fix [WRITE_LIST]    Allow ansible-lint to perform auto-fixes, including YAML
>> reformatting

I use ansible-lint extensively before commit. Linting existing kdevops
files is still a bit of a jungle, so ansible-lint would have to be
directed only at new files.
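One way to direct it only at new work is to hand ansible-lint just the files
a commit range touches; a minimal sketch (the default commit range and the
git invocation are illustrative assumptions, not an existing kdevops helper):

```python
import subprocess


def ansible_files(paths):
    """Keep only YAML/Ansible files from a list of changed paths."""
    return [p for p in paths if p.endswith((".yml", ".yaml"))]


def changed_ansible_files(range_spec="HEAD~1..HEAD"):
    """List the Ansible files touched in a git commit range."""
    out = subprocess.run(
        ["git", "diff", "--name-only", range_spec],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    return ansible_files(out.splitlines())
```

The resulting list could then be passed straight to ansible-lint, leaving
the existing tree untouched.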


>> So, these changes with black/ansible-lint, etc make sense if:
>> 1. Add b4 integration
>> 2. Make CI run the scripts as well (make style, make check, etc)
>>
>> But I suspect a minimal testing may be needed.
>>
>> Thoughts?
> 
> I'll send an RFC with more details.

-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 7/9] all: run black
  2025-08-01 12:55       ` Chuck Lever
@ 2025-08-01 16:29         ` Daniel Gomez
  2025-08-01 16:55           ` Chuck Lever
  0 siblings, 1 reply; 19+ messages in thread
From: Daniel Gomez @ 2025-08-01 16:29 UTC (permalink / raw)
  To: Chuck Lever; +Cc: Luis Chamberlain, Daniel Gomez, kdevops



On 01/08/2025 14.55, Chuck Lever wrote:
> On 8/1/25 4:12 AM, Daniel Gomez wrote:
>>
>>
>> On 31/07/2025 14.57, Daniel Gomez wrote:
>>> On 30/07/2025 08.01, Luis Chamberlain wrote:
>>>> Run black to fix tons of styling issues with tons of Python scripts.
>>>> In order to help bots ensure they don't add odd styling we need a
>>>> convention.
>>>>
>>>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
>>>
>>> Acked-by: Daniel Gomez <da.gomez@samsung.com>
>>>
>>> FYI, I'm working on b4 check stuff mentioned in the other thread. I think this
>>> is really nice but it would be awesome to also extend it to Ansible files using
>>> the Ansible linter:
>>>
>>> ansible-lint --help
>>> {...}
>>> --fix [WRITE_LIST]    Allow ansible-lint to perform auto-fixes, including YAML
>>> reformatting
> 
> I use ansible-lint extensively before commit. Linting existing kdevops
> files is still a bit of a jungle, so ansible-lint would have to be
> directed only at new files.

It's wild. I've sent an RFC with this:

git show --shortstat HEAD
234 files changed, 5308 insertions(+), 5098 deletions(-)
git show HEAD | grep -c '^@@'
735

Hopefully this will raise the bar a bit on formatting. The next challenge is
to keep it that way.

>
> 
>>> So, these changes with black/ansible-lint, etc make sense if:
>>> 1. Add b4 integration
>>> 2. Make CI run the scripts as well (make style, make check, etc)
>>>
>>> But I suspect a minimal testing may be needed.
>>>
>>> Thoughts?
>>
>> I'll send an RFC with more details.
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 7/9] all: run black
  2025-08-01 16:29         ` Daniel Gomez
@ 2025-08-01 16:55           ` Chuck Lever
  0 siblings, 0 replies; 19+ messages in thread
From: Chuck Lever @ 2025-08-01 16:55 UTC (permalink / raw)
  To: Daniel Gomez; +Cc: Luis Chamberlain, Daniel Gomez, kdevops

On 8/1/25 12:29 PM, Daniel Gomez wrote:
> 
> 
> On 01/08/2025 14.55, Chuck Lever wrote:
>> On 8/1/25 4:12 AM, Daniel Gomez wrote:
>>>
>>>
>>> On 31/07/2025 14.57, Daniel Gomez wrote:
>>>> On 30/07/2025 08.01, Luis Chamberlain wrote:
>>>>> Run black to fix tons of styling issues with tons of Python scripts.
>>>>> In order to help bots ensure they don't add odd styling we need a
>>>>> convention.
>>>>>
>>>>> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
>>>>
>>>> Acked-by: Daniel Gomez <da.gomez@samsung.com>
>>>>
>>>> FYI, I'm working on b4 check stuff mentioned in the other thread. I think this
>>>> is really nice but it would be awesome to also extend it to Ansible files using
>>>> the Ansible linter:
>>>>
>>>> ansible-lint --help
>>>> {...}
>>>> --fix [WRITE_LIST]    Allow ansible-lint to perform auto-fixes, including YAML
>>>> reformatting
>>
>> I use ansible-lint extensively before commit. Linting existing kdevops
>> files is still a bit of a jungle, so ansible-lint would have to be
>> directed only at new files.
> 
> It's wild. I've sent an RFC with this:
> 
> git show --shortstat HEAD
> 234 files changed, 5308 insertions(+), 5098 deletions(-)
> git show HEAD | grep -c '^@@'
> 735
> 
> Hopefully this will raise the bar a bit on formatting. The next challenge
> is to keep it there.

I generally try to fix things up as I encounter lint complaints, as I
am a fan of static analysis. Even though the "actual positive" rate can
be low, some discovered issues have been significant, IME.

However, there is risk in making changes when static checkers throw
warnings. Not infrequently, an otherwise innocent clean-up can result
in unintended behavior changes. So I try to keep the churn to a minimum
and test thoroughly.

Several enormous patch series have come through in the past week, and it
is difficult to review all of that. Even though tools like Claude and
black can generate a very high volume of patches, nothing guarantees
these are quality changes. I would like to see things slow down a little
so we humans have a chance to paw through them.

So +1 on the use of lint tools! But also let's tread carefully.
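One way to tread carefully is to make the style gate degrade gracefully: run each checker only when it is actually installed, so a missing tool never blocks a contributor. A hypothetical sketch (tool names are the ones discussed in this thread; the echo lines stand in for real invocations like `black --check` or `ansible-lint`):

```shell
# Hypothetical sketch: skip style checkers that are not installed
# instead of failing the whole gate.
for tool in black ansible-lint; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "running $tool"
    else
        echo "skipping $tool (not installed)"
    fi
done
```

A `make style` target built this way stays usable on minimal systems while still enforcing formatting wherever the tools exist.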


>>>> So, these changes with black/ansible-lint, etc make sense if:
>>>> 1. Add b4 integration
>>>> 2. Make CI run the scripts as well (make style, make check, etc)
>>>>
>>>> But I suspect a minimal testing may be needed.
>>>>
>>>> Thoughts?
>>>
>>> I'll send an RFC with more details.
>>
> 


-- 
Chuck Lever

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing
  2025-07-30  6:41   ` Daniel Gomez
@ 2025-08-01 17:39     ` Luis Chamberlain
  0 siblings, 0 replies; 19+ messages in thread
From: Luis Chamberlain @ 2025-08-01 17:39 UTC (permalink / raw)
  To: Daniel Gomez; +Cc: Chuck Lever, Daniel Gomez, kdevops

On Wed, Jul 30, 2025 at 08:41:12AM +0200, Daniel Gomez wrote:
> On 30/07/2025 08.01, Luis Chamberlain wrote:
> > Debian testing (trixie) VMs can fail to provision when configured APT
> > mirrors become unavailable or unresponsive. This is particularly common
> > with local or regional mirrors that may have intermittent connectivity
> > issues.
> > 
> > This fix adds automatic mirror health checking specifically for Debian
> > testing systems. The implementation:
> > 
> > 1. Extracts the currently configured APT mirror hostname
> > 2. Tests connectivity to the mirror on port 80 with a 10 second timeout
> > 3. Falls back to official Debian mirrors if the test fails
> > 4. Backs up the original sources.list before making changes
> > 5. Updates the APT cache after switching mirrors
> > 6. Provides clear user notification about the fallback
> > 
> > The check only runs on Debian testing systems where devconfig_debian_testing
> > is set to true, avoiding any impact on stable Debian or other distributions.
> > 
> > This ensures that Debian testing VMs can successfully provision even when
> > the initially configured mirror is unavailable, improving reliability for
> > development and testing workflows.
> > 
> > Generated-by: Claude AI
> > Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> > ---
> >  .../devconfig/tasks/check-apt-mirrors.yml     | 63 +++++++++++++++++++
> >  playbooks/roles/devconfig/tasks/main.yml      |  8 +++
> >  .../debian-testing-fallback-sources.list      | 10 +++
> >  3 files changed, 81 insertions(+)
> >  create mode 100644 playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
> >  create mode 100644 playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> > 
> > diff --git a/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
> > new file mode 100644
> > index 00000000..02e0c800
> > --- /dev/null
> > +++ b/playbooks/roles/devconfig/tasks/check-apt-mirrors.yml
> > @@ -0,0 +1,63 @@
> > +---
> > +# Only run mirror checks for Debian testing (trixie) where mirror issues are common
> > +- name: Extract current APT mirror hostname
> > +  shell: |
> > +    grep -E "^deb\s+http" /etc/apt/sources.list | head -1 | awk '{print $2}' | sed 's|http://||' | cut -d'/' -f1
> > +  register: apt_mirror_host
> > +  changed_when: false
> > +  ignore_errors: yes
> > +
> > +- name: Check connectivity to current APT mirror
> > +  wait_for:
> > +    host: "{{ apt_mirror_host.stdout }}"
> > +    port: 80
> > +    timeout: 10
> > +  register: mirror_connectivity
> > +  ignore_errors: yes
> > +  when: apt_mirror_host.stdout != ""
> > +
> > +- name: Display mirror check results
> > +  debug:
> > +    msg: |
> > +      Current APT mirror: {{ apt_mirror_host.stdout | default('Not found') }}
> > +      Mirror connectivity: {{ 'OK' if mirror_connectivity is not failed else 'FAILED' }}
> > +  when: apt_mirror_host.stdout != ""
> > +
> > +- name: Fall back to official Debian mirrors if current mirror fails
> > +  block:
> > +    - name: Backup current sources.list
> > +      copy:
> > +        src: /etc/apt/sources.list
> > +        dest: /etc/apt/sources.list.backup
> > +        remote_src: yes
> > +      become: yes
> > +
> > +    - name: Apply Debian testing fallback sources
> > +      template:
> > +        src: debian-testing-fallback-sources.list
> > +        dest: /etc/apt/sources.list
> > +        owner: root
> > +        group: root
> > +        mode: '0644'
> > +      become: yes
> > +
> > +    - name: Update APT cache after mirror change
> > +      apt:
> > +        update_cache: yes
> > +        cache_valid_time: 0
> > +      become: yes
> > +
> > +    - name: Inform user about mirror fallback
> > +      debug:
> > +        msg: |
> > +          WARNING: The configured APT mirror '{{ apt_mirror_host.stdout }}' is not accessible.
> > +          Falling back to official Debian testing mirrors:
> > +          - deb.debian.org for main packages
> > +          - security.debian.org for security updates
> > +
> > +          This may result in slower package downloads depending on your location.
> > +          Consider configuring a local mirror for better performance.
> > +
> > +  when:
> > +    - apt_mirror_host.stdout != ""
> > +    - mirror_connectivity is failed
> > diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
> > index 656d5389..ceb0f2e8 100644
> > --- a/playbooks/roles/devconfig/tasks/main.yml
> > +++ b/playbooks/roles/devconfig/tasks/main.yml
> > @@ -30,6 +30,14 @@
> >    tags: hostname
> >  
> >  # Distro specific
> > +
> > +# Check and fix APT mirrors for Debian testing before installing dependencies
> > +- name: Check and fix APT mirrors for Debian testing
> > +  include_tasks: check-apt-mirrors.yml
> > +  when:
> > +    - devconfig_debian_testing is defined
> > +    - devconfig_debian_testing | bool
> > +
> >  - name: Install dependencies
> >    ansible.builtin.include_tasks: install-deps/main.yml
> >    tags: ['vars', 'vars_simple']
> > diff --git a/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> > new file mode 100644
> > index 00000000..456ed60f
> > --- /dev/null
> > +++ b/playbooks/roles/devconfig/templates/debian-testing-fallback-sources.list
> > @@ -0,0 +1,10 @@
> > +deb http://deb.debian.org/debian testing main contrib non-free non-free-firmware
> > +deb-src http://deb.debian.org/debian testing main contrib non-free non-free-firmware
> > +
> > +# Security updates
> > +deb http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
> > +deb-src http://security.debian.org/debian-security testing-security main contrib non-free non-free-firmware
> > +
> > +# Updates (if available for testing)
> > +deb http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
> > +deb-src http://deb.debian.org/debian testing-updates main contrib non-free non-free-firmware
> > \ No newline at end of file
> 
> Debian has switched to a new sources format [1]. They have also provided a
> manual conversion for users with "apt modernize-sources". So it makes sense to
> start using the new format directly here.
> 
> [1] https://wiki.debian.org/SourcesList

I'll post a v3 that also includes enhancements to the hop1 inference.
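For reference, the deb822 equivalent of the fallback template in this patch might look roughly like this (a sketch only: the suites and components mirror the one-line entries above, but the exact layout should be checked against the Debian wiki page Daniel linked):

```
Types: deb deb-src
URIs: http://deb.debian.org/debian
Suites: testing testing-updates
Components: main contrib non-free non-free-firmware

Types: deb deb-src
URIs: http://security.debian.org/debian-security
Suites: testing-security
Components: main contrib non-free non-free-firmware
```

In the new format this would live under /etc/apt/sources.list.d/ with a .sources extension rather than in /etc/apt/sources.list.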

  Luis

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2025-08-01 17:39 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-30  6:01 [PATCH v2 0/9] kdevops: add support for A/B testing Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 1/9] roles/guestfs: add missing bootlinux_9p: False Luis Chamberlain
2025-07-30 14:17   ` Chuck Lever
2025-07-30  6:01 ` [PATCH v2 2/9] Makefile: suppress Ansible warnings during configuration generation Luis Chamberlain
2025-07-30  6:22   ` Daniel Gomez
2025-07-30  6:01 ` [PATCH v2 3/9] playbooks: few space cleanups Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 4/9] style: add extensive code formatting checks to make style Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 5/9] Makefile: move styling to scripts/style.Makefile Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 6/9] CLAUDE.md: add instrucitons to verify commit Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 7/9] all: run black Luis Chamberlain
2025-07-31 12:57   ` Daniel Gomez
2025-08-01  8:12     ` Daniel Gomez
2025-08-01 12:55       ` Chuck Lever
2025-08-01 16:29         ` Daniel Gomez
2025-08-01 16:55           ` Chuck Lever
2025-07-30  6:01 ` [PATCH v2 8/9] devconfig: add automatic APT mirror fallback for Debian testing Luis Chamberlain
2025-07-30  6:41   ` Daniel Gomez
2025-08-01 17:39     ` Luis Chamberlain
2025-07-30  6:01 ` [PATCH v2 9/9] bootlinux: add support for A/B kernel testing Luis Chamberlain

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox