public inbox for kdevops@lists.linux.dev
 help / color / mirror / Atom feed
* [PATCH v2 0/8] nfs: few fixes and enhancements
@ 2025-10-03 20:19 Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 1/8] defconfigs: add NFS testing configurations Chuck Lever
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain, Chuck Lever

From: Chuck Lever <chuck.lever@oracle.com>

Original cover:

In preparation for talking about NFS tests at the MSST conference today,
I figured I'd give all the NFS tests a run. I ran out of time, but this
at least plumbed enough of the infrastructure to get some results out.

The iSCSI changes are likely not correct and can be dropped, so feel
free to take in only what makes sense and drop anything that looks
silly.

Updates:

I'm reposting because, strangely, I never received these patches in
my inbox. I pulled this series from lore using "b4 am" so we can
keep reviewing.

For "devconfig: exclude nfsd from journal upload client
configuration", I wonder whether, instead of "nfsd", the new checks
should look for the "service" group, which usually includes nfsd,
the SMB server, the iSCSI target, and the Kerberos KDC. Any opinion
on that?
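
For illustration, such a check might look like the sketch below. This is
a hedged adaptation of the conditions patch 2 adds; the task body and
variable names are taken from that context, and whether "service" is the
right group to match is exactly the open question above:

```yaml
# Hypothetical sketch: skip journal-upload client tasks on any host in
# the "service" group (nfsd, the SMB server, the iSCSI target, the
# Kerberos KDC) rather than matching only "nfsd".
- name: Enable and restart systemd-journal-upload.service on the client
  ansible.builtin.systemd_service:
    name: systemd-journal-upload.service
    enabled: true
    state: restarted
  when:
    - devconfig_enable_systemd_journal_remote | bool
    - "'service' not in group_names"
```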

I've dropped the iSCSI-specific changes. Except for a couple of
nits, the remaining patches look great to me.

The original idea for pNFS block testing was that a separate iSCSI
target would be set up: either one outside the kdevops test network,
or one enabled by setting CONFIG_KDEVOPS_ENABLE_ISCSI. The kdevops
nfsd server would then use that iSCSI target, selected by changing
the "Persistent storage for exported file systems" setting to
"iSCSI".

It's easier overall if the iSCSI target host is separate from the
kdevops nfsd host. iSCSI loopback is less performant and less
reliable, I've found, and IIRC may not be supported on every Linux
distribution we want to run kdevops on.

The gitr, nfstest, and pynfs workflows should all be able to use
pNFS block, and indeed the latter two have additional tests
specifically for the pNFS block layout. So fstests-specific pNFS
block patches don't seem right to me.

Luis, if you believe I missed something and need to revisit one or
more of the patches I've left out, please don't hesitate to bring
it up.

Luis Chamberlain (8):
  defconfigs: add NFS testing configurations
  devconfig: exclude nfsd from journal upload client configuration
  iscsi: add missing initiator packages for Debian
  nfsd_add_export: fix become method for filesystem formatting
  workflows: fstests: fix incorrect pNFS export configuration
  nfstest: add results visualization support
  fstests: add soak duration to nfs template
  pynfs: add visualization support for test results

 defconfigs/nfs-fstests                        |   38 +
 defconfigs/nfs-gitr                           |   38 +
 defconfigs/nfs-ltp                            |   31 +
 defconfigs/nfstests                           |   30 +
 defconfigs/pynfs-pnfs-block                   |   34 +
 playbooks/roles/devconfig/tasks/main.yml      |    3 +
 playbooks/roles/fstests/tasks/main.yml        |    5 +-
 .../roles/fstests/templates/nfs/nfs.config    |    4 +
 playbooks/roles/iscsi/vars/Debian.yml         |    3 +
 .../nfsd_add_export/tasks/storage/local.yml   |    3 +-
 scripts/workflows/pynfs/visualize_results.py  | 1014 +++++++++++++++++
 workflows/Makefile                            |    4 +
 workflows/nfstest/Makefile                    |    1 +
 .../nfstest/scripts/generate_nfstest_html.py  |  783 +++++++++++++
 .../nfstest/scripts/parse_nfstest_results.py  |  277 +++++
 .../scripts/visualize_nfstest_results.sh      |   61 +
 workflows/pynfs/Makefile                      |   17 +-
 17 files changed, 2341 insertions(+), 5 deletions(-)
 create mode 100644 defconfigs/nfs-fstests
 create mode 100644 defconfigs/nfs-gitr
 create mode 100644 defconfigs/nfs-ltp
 create mode 100644 defconfigs/nfstests
 create mode 100644 defconfigs/pynfs-pnfs-block
 create mode 100755 scripts/workflows/pynfs/visualize_results.py
 create mode 100755 workflows/nfstest/scripts/generate_nfstest_html.py
 create mode 100755 workflows/nfstest/scripts/parse_nfstest_results.py
 create mode 100755 workflows/nfstest/scripts/visualize_nfstest_results.sh

-- 
2.51.0


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2 1/8] defconfigs: add NFS testing configurations
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 2/8] devconfig: exclude nfsd from journal upload client configuration Chuck Lever
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

Add five defconfigs for NFS testing:

NFS filesystem testing configurations:
- nfs-fstests: Filesystem testing suite (fstests) on NFS mounts
  * Enables pNFS section for testing with pNFS-capable exports
  * Tests NFSv4.2, v4.1 with pNFS export capability
- nfs-gitr: Git regression testing on NFS mounts
  * Tests git operations on NFS with pNFS export capability
  * Enables NFSv4.2 and pNFS sections for comprehensive coverage
- nfs-ltp: Linux Test Project suite on NFS mounts
  * General test suite that runs on NFS (not pNFS-specific)

NFS protocol testing configurations:
- nfstests: NFStest protocol conformance suite
  * Tests NFS protocol interoperability
  * General NFS testing (not pNFS-specific)
- pynfs-pnfs-block: PyNFS pNFS block layout protocol testing
  * Specifically tests pNFS block layout protocol conformance
  * This is the only configuration that actually tests pNFS-specific features

All configurations:
- Build kernels from Linus' tree for latest development
- Use kdevops-provided NFS server for consistent test environment
- Enable systemd journal remote for enhanced debugging
- Support 9P filesystem for efficient host-guest kernel development

Note: Most tests run on NFS mounts that may have pNFS capability enabled
on the server side, but only pynfs-pnfs-block specifically tests pNFS
protocol features.
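
As a usage sketch, one of these defconfigs would be exercised roughly as
follows. This assumes kdevops' usual generated defconfig-* make targets
apply to the new files; the exact target names are an assumption:

```sh
# Hypothetical workflow, assuming the standard kdevops make targets:
make defconfig-nfs-fstests   # select this configuration
make bringup                 # provision the guests
make linux                   # build and boot the Linus-tree kernel
make fstests-baseline        # run fstests on the NFS mounts
```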

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 defconfigs/nfs-fstests      | 38 +++++++++++++++++++++++++++++++++++++
 defconfigs/nfs-gitr         | 38 +++++++++++++++++++++++++++++++++++++
 defconfigs/nfs-ltp          | 31 ++++++++++++++++++++++++++++++
 defconfigs/nfstests         | 30 +++++++++++++++++++++++++++++
 defconfigs/pynfs-pnfs-block | 34 +++++++++++++++++++++++++++++++++
 5 files changed, 171 insertions(+)
 create mode 100644 defconfigs/nfs-fstests
 create mode 100644 defconfigs/nfs-gitr
 create mode 100644 defconfigs/nfs-ltp
 create mode 100644 defconfigs/nfstests
 create mode 100644 defconfigs/pynfs-pnfs-block

diff --git a/defconfigs/nfs-fstests b/defconfigs/nfs-fstests
new file mode 100644
index 000000000000..03dc2e64237f
--- /dev/null
+++ b/defconfigs/nfs-fstests
@@ -0,0 +1,38 @@
+# pNFS configuration for filesystem testing with fstests
+
+# Use libvirt/QEMU for virtualization
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+# Enable workflows
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+# Linux kernel building with 9P for development
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_LINUS=y
+
+# Enable testing workflows
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+
+# Enable fstests workflow with pNFS testing
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_FSTESTS=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_FSTESTS=y
+CONFIG_FSTESTS_NFS=y
+CONFIG_FSTESTS_FSTYP="nfs"
+
+# Enable manual coverage for NFS to select pNFS
+CONFIG_FSTESTS_NFS_MANUAL_COVERAGE=y
+CONFIG_FSTESTS_NFS_SECTION_PNFS=y
+CONFIG_FSTESTS_NFS_SECTION_V42=y
+CONFIG_FSTESTS_NFS_SECTION_V41=y
+
+# Use kdevops NFS server for fstests
+CONFIG_FSTESTS_USE_KDEVOPS_NFSD=y
+
+# Enable systemd journal remote for debugging
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
diff --git a/defconfigs/nfs-gitr b/defconfigs/nfs-gitr
new file mode 100644
index 000000000000..2c097d01951d
--- /dev/null
+++ b/defconfigs/nfs-gitr
@@ -0,0 +1,38 @@
+# NFS configuration for git regression testing
+# Tests git operations on NFS mounts with pNFS export capability
+
+# Use libvirt/QEMU for virtualization
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+# Enable workflows
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+# Linux kernel building with 9P for development
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_LINUS=y
+
+# Enable testing workflows
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+
+# Enable gitr workflow with pNFS testing
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_GITR=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_GITR=y
+
+# Enable pNFS section for gitr workflow
+CONFIG_GITR_NFS_SECTION_PNFS=y
+CONFIG_GITR_NFS_SECTION_V42=y
+
+# Use kdevops NFS server
+CONFIG_GITR_USE_KDEVOPS_NFSD=y
+
+# Enable kdevops NFS server setup
+CONFIG_KDEVOPS_SETUP_NFSD=y
+
+# Enable systemd journal remote for debugging
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
diff --git a/defconfigs/nfs-ltp b/defconfigs/nfs-ltp
new file mode 100644
index 000000000000..4562874efa0c
--- /dev/null
+++ b/defconfigs/nfs-ltp
@@ -0,0 +1,31 @@
+# pNFS configuration for Linux Test Project (LTP)
+# Note: LTP doesn't specifically test pNFS, but can run on pNFS mounts
+
+# Use libvirt/QEMU for virtualization
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+# Enable workflows
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+# Linux kernel building with 9P for development
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_LINUS=y
+
+# Enable testing workflows
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+
+# Enable LTP workflow
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_LTP=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_LTP=y
+
+# Use kdevops-provided NFS server for pNFS mount
+CONFIG_KDEVOPS_SETUP_NFSD=y
+
+# Enable systemd journal remote for debugging
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
\ No newline at end of file
diff --git a/defconfigs/nfstests b/defconfigs/nfstests
new file mode 100644
index 000000000000..543c39f3411d
--- /dev/null
+++ b/defconfigs/nfstests
@@ -0,0 +1,30 @@
+# NFS configuration for NFStest testing suite
+
+# Use libvirt/QEMU for virtualization
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+# Enable workflows
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+# Linux kernel building with 9P for development
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_LINUS=y
+
+# Enable testing workflows
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+
+# Enable nfstest workflow
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_NFSTEST=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST=y
+
+# Use kdevops-provided NFS server
+CONFIG_KDEVOPS_SETUP_NFSD=y
+
+# Enable systemd journal remote for debugging
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
diff --git a/defconfigs/pynfs-pnfs-block b/defconfigs/pynfs-pnfs-block
new file mode 100644
index 000000000000..595a850e3796
--- /dev/null
+++ b/defconfigs/pynfs-pnfs-block
@@ -0,0 +1,34 @@
+# PyNFS configuration for pNFS block layout protocol testing
+# Specifically tests pNFS block layout protocol conformance
+
+# Use libvirt/QEMU for virtualization
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+# Enable workflows
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+# Linux kernel building with 9P for development
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_LINUS=y
+
+# Enable testing workflows
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+
+# Enable pynfs workflow for pNFS protocol testing
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_PYNFS=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_PYNFS=y
+
+# Enable pNFS block layout tests
+CONFIG_PYNFS_PNFS_BLOCK=y
+
+# Use kdevops-provided NFS server
+CONFIG_KDEVOPS_SETUP_NFSD=y
+
+# Enable systemd journal remote for debugging
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 2/8] devconfig: exclude nfsd from journal upload client configuration
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 1/8] defconfigs: add NFS testing configurations Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 3/8] iscsi: add missing initiator packages for Debian Chuck Lever
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

The NFS server (nfsd group) should not run the
systemd-journal-upload service, as it acts as the journal remote
server, not a client. The journal upload service is for clients
that send logs to the remote server.

Add a condition excluding 'nfsd' group members from the journal
upload client configuration tasks in both the devconfig and fstests
roles.

This fixes failures where systemd-journal-upload.service was not
found on NFS server nodes; they should run only the journal remote
receiver service, not the upload client.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/devconfig/tasks/main.yml | 3 +++
 playbooks/roles/fstests/tasks/main.yml   | 1 +
 2 files changed, 4 insertions(+)

diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
index ae16a6982322..3499f74443ad 100644
--- a/playbooks/roles/devconfig/tasks/main.yml
+++ b/playbooks/roles/devconfig/tasks/main.yml
@@ -565,6 +565,7 @@
     lstrip_blocks: true
   when:
     - devconfig_enable_systemd_journal_remote|bool
+    - "'nfsd' not in group_names"
 
 - name: Enable and restart systemd-journal-upload.service on the client
   tags: ["journal", "journal-upload-restart"]
@@ -578,6 +579,7 @@
     daemon_reload: true
   when:
     - devconfig_enable_systemd_journal_remote|bool
+    - "'nfsd' not in group_names"
 
 - name: Ensure systemd-journal-remote.service is running on the server
   tags: ["journal-status"]
@@ -602,6 +604,7 @@
     state: started
   when:
     - devconfig_enable_systemd_journal_remote|bool
+    - "'nfsd' not in group_names"
 
 - name: Set up the client /etc/systemd/timesyncd.conf
   tags: ["timesyncd"]
diff --git a/playbooks/roles/fstests/tasks/main.yml b/playbooks/roles/fstests/tasks/main.yml
index f12bfdaeba03..c3dcecf538b8 100644
--- a/playbooks/roles/fstests/tasks/main.yml
+++ b/playbooks/roles/fstests/tasks/main.yml
@@ -1137,6 +1137,7 @@
     state: started
   when:
     - devconfig_enable_systemd_journal_remote|bool
+    - "'nfsd' not in group_names"
 
 - name: Hint to watchdog tests are about to kick off
   ansible.builtin.file:
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 3/8] iscsi: add missing initiator packages for Debian
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 1/8] defconfigs: add NFS testing configurations Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 2/8] devconfig: exclude nfsd from journal upload client configuration Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 4/8] nfsd_add_export: fix become method for filesystem formatting Chuck Lever
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

The Debian vars file was missing an iscsi_initiator_packages
definition, causing failures when setting up iSCSI initiators
for pNFS block layout testing.

Add the open-iscsi package to match the pattern used on SUSE.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/iscsi/vars/Debian.yml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/playbooks/roles/iscsi/vars/Debian.yml b/playbooks/roles/iscsi/vars/Debian.yml
index 3495468e324d..606576c4f6b8 100644
--- a/playbooks/roles/iscsi/vars/Debian.yml
+++ b/playbooks/roles/iscsi/vars/Debian.yml
@@ -3,4 +3,7 @@ iscsi_target_packages:
   - targetcli-fb
   - sg3-utils
 
+iscsi_initiator_packages:
+  - open-iscsi
+
 iscsi_target_service_name: targetclid.socket
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 4/8] nfsd_add_export: fix become method for filesystem formatting
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (2 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 3/8] iscsi: add missing initiator packages for Debian Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 5/8] workflows: fstests: fix incorrect pNFS export configuration Chuck Lever
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

The filesystem module was unable to find mkfs.xfs when using
'become_flags: "su - -c"', which changes the environment. Switch
to the standard 'become_method: sudo' to preserve PATH so the
module can find the mkfs.xfs executable.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/nfsd_add_export/tasks/storage/local.yml | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/playbooks/roles/nfsd_add_export/tasks/storage/local.yml b/playbooks/roles/nfsd_add_export/tasks/storage/local.yml
index c366a13ff546..aaaf1de17993 100644
--- a/playbooks/roles/nfsd_add_export/tasks/storage/local.yml
+++ b/playbooks/roles/nfsd_add_export/tasks/storage/local.yml
@@ -11,8 +11,7 @@
 
 - name: Format new volume for {{ export_fstype }}
   become: true
-  become_flags: "su - -c"
-  become_method: ansible.builtin.sudo
+  become_method: sudo
   delegate_to: "{{ server_host }}"
   community.general.filesystem:
     fstype: "{{ export_fstype }}"
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 5/8] workflows: fstests: fix incorrect pNFS export configuration
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (3 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 4/8] nfsd_add_export: fix become method for filesystem formatting Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 6/8] nfstest: add results visualization support Chuck Lever
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

The fstests role was incorrectly setting export_pnfs=true for pNFS
test sections. The kernel pNFS export option requires a proper block
layout backend (such as an iSCSI target), which isn't present.

Regular NFSv4.1/4.2 testing doesn't need this option. The incorrect
configuration caused NFS server lockd failures and made NFS mounts
hang indefinitely.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/fstests/tasks/main.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/playbooks/roles/fstests/tasks/main.yml b/playbooks/roles/fstests/tasks/main.yml
index c3dcecf538b8..f0fbcda48345 100644
--- a/playbooks/roles/fstests/tasks/main.yml
+++ b/playbooks/roles/fstests/tasks/main.yml
@@ -758,7 +758,7 @@
     export_options: "{{ nfsd_export_options }}"
     export_fstype: "{{ fstests_nfs_export_fstype }}"
     export_size: 20g
-    export_pnfs: "{{ fstests_nfs_section_pnfs | bool }}"
+    export_pnfs: false
   when:
     - fstests_fstyp == "nfs"
     - fstests_nfs_use_kdevops_nfsd|bool
@@ -772,7 +772,7 @@
     export_options: "{{ nfsd_export_options }}"
     export_fstype: "{{ fstests_nfs_export_fstype }}"
     export_size: 30g
-    export_pnfs: "{{ fstests_nfs_section_pnfs | bool }}"
+    export_pnfs: false
   when:
     - fstests_fstyp == "nfs"
     - fstests_nfs_use_kdevops_nfsd|bool
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 6/8] nfstest: add results visualization support
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (4 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 5/8] workflows: fstests: fix incorrect pNFS export configuration Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 7/8] fstests: add soak duration to nfs template Chuck Lever
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

Add a 'make nfstests-results-visualize' target to generate an HTML
visualization of NFS test results. It processes test logs from
workflows/nfstest/results/last-run and creates a self-contained HTML
report with charts and statistics.

The visualization includes:
- Overall test summary with pass/fail statistics
- Interactive pie charts for test results
- Detailed results grouped by NFS protocol version
- Collapsible sections for easy navigation
- Test configuration details

Usage: make nfstests-results-visualize
Output: workflows/nfstest/results/html/

This makes it easy to analyze test results and share them by simply
copying the html directory via scp.
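
At its core, the pass/fail summary comes down to counting result lines
in the logs. The sketch below illustrates that idea on a simplified,
assumed log format; the real parse_nfstest_results.py handles nfstest's
actual output, which may differ:

```python
import re
from collections import Counter

# Hypothetical nfstest log excerpt -- the real output format may differ.
log = """\
TEST: Verify open on NFSv4.1
    PASS: open succeeded
    PASS: file attributes match
    FAIL: unexpected delegation
    SKIP: server lacks support
"""

def summarize(text):
    """Count PASS/FAIL/SKIP result lines in a log excerpt."""
    counts = Counter()
    for line in text.splitlines():
        m = re.match(r"\s*(PASS|FAIL|SKIP):", line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

print(summarize(log))  # {'PASS': 2, 'FAIL': 1, 'SKIP': 1}
```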

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 workflows/Makefile                            |   4 +
 workflows/nfstest/Makefile                    |   1 +
 .../nfstest/scripts/generate_nfstest_html.py  | 783 ++++++++++++++++++
 .../nfstest/scripts/parse_nfstest_results.py  | 277 +++++++
 .../scripts/visualize_nfstest_results.sh      |  61 ++
 5 files changed, 1126 insertions(+)
 create mode 100755 workflows/nfstest/scripts/generate_nfstest_html.py
 create mode 100755 workflows/nfstest/scripts/parse_nfstest_results.py
 create mode 100755 workflows/nfstest/scripts/visualize_nfstest_results.sh

diff --git a/workflows/Makefile b/workflows/Makefile
index 05c75a2d711b..58b56688f348 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -50,6 +50,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST))
 include workflows/nfstest/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST == y
 
+# Always available nfstest visualization target
+nfstests-results-visualize:
+	$(Q)bash $(shell pwd)/workflows/nfstest/scripts/visualize_nfstest_results.sh
+
 ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH))
 include workflows/sysbench/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH == y
diff --git a/workflows/nfstest/Makefile b/workflows/nfstest/Makefile
index fca7a51af7ab..4bd8e147adb0 100644
--- a/workflows/nfstest/Makefile
+++ b/workflows/nfstest/Makefile
@@ -99,6 +99,7 @@ nfstest-help-menu:
 	@echo "nfstest options:"
 	@echo "nfstest                              - Git clone nfstest and install it"
 	@echo "nfstest-{baseline,dev}               - Run selected nfstests on baseline or dev hosts and collect results"
+	@echo "nfstests-results-visualize           - Generate HTML visualization of test results"
 	@echo ""
 
 HELP_TARGETS += nfstest-help-menu
diff --git a/workflows/nfstest/scripts/generate_nfstest_html.py b/workflows/nfstest/scripts/generate_nfstest_html.py
new file mode 100755
index 000000000000..277992aeeee2
--- /dev/null
+++ b/workflows/nfstest/scripts/generate_nfstest_html.py
@@ -0,0 +1,783 @@
+#!/usr/bin/env python3
+"""
+Generate HTML visualization for NFS test results
+"""
+
+import json
+import os
+import sys
+import glob
+import base64
+from datetime import datetime
+from pathlib import Path
+from collections import defaultdict
+
+# Try to import matplotlib, but make it optional
+try:
+    import matplotlib
+
+    matplotlib.use("Agg")
+    import matplotlib.pyplot as plt
+    import matplotlib.patches as mpatches
+
+    HAS_MATPLOTLIB = True
+except ImportError:
+    HAS_MATPLOTLIB = False
+    print(
+        "Warning: matplotlib not found. Graphs will not be generated.", file=sys.stderr
+    )
+
+HTML_TEMPLATE = """
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>NFS Test Results - {timestamp}</title>
+    <style>
+        :root {{
+            --primary-color: #2c3e50;
+            --secondary-color: #3498db;
+            --success-color: #27ae60;
+            --danger-color: #e74c3c;
+            --warning-color: #f39c12;
+            --light-bg: #ecf0f1;
+            --card-shadow: 0 2px 4px rgba(0,0,0,0.1);
+        }}
+
+        * {{
+            margin: 0;
+            padding: 0;
+            box-sizing: border-box;
+        }}
+
+        body {{
+            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
+            line-height: 1.6;
+            color: #333;
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            min-height: 100vh;
+            padding: 20px;
+        }}
+
+        .container {{
+            max-width: 1400px;
+            margin: 0 auto;
+            background: white;
+            border-radius: 12px;
+            overflow: hidden;
+            box-shadow: 0 20px 60px rgba(0,0,0,0.3);
+        }}
+
+        .header {{
+            background: var(--primary-color);
+            color: white;
+            padding: 40px;
+            text-align: center;
+            position: relative;
+            overflow: hidden;
+        }}
+
+        .header::before {{
+            content: '';
+            position: absolute;
+            top: 0;
+            left: 0;
+            right: 0;
+            bottom: 0;
+            background: linear-gradient(135deg, rgba(52, 152, 219, 0.1), rgba(46, 204, 113, 0.1));
+        }}
+
+        h1 {{
+            margin: 0;
+            font-size: 2.5em;
+            position: relative;
+            z-index: 1;
+        }}
+
+        .subtitle {{
+            margin-top: 10px;
+            opacity: 0.9;
+            font-size: 1.1em;
+            position: relative;
+            z-index: 1;
+        }}
+
+        .content {{
+            padding: 40px;
+        }}
+
+        .summary-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin-bottom: 40px;
+        }}
+
+        .summary-card {{
+            background: white;
+            border: 1px solid #e0e0e0;
+            padding: 25px;
+            border-radius: 10px;
+            text-align: center;
+            transition: all 0.3s ease;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .summary-card:hover {{
+            transform: translateY(-5px);
+            box-shadow: 0 5px 20px rgba(0,0,0,0.15);
+        }}
+
+        .summary-card.success {{
+            background: linear-gradient(135deg, #667eea20 0%, #27ae6020 100%);
+            border-color: var(--success-color);
+        }}
+
+        .summary-card.danger {{
+            background: linear-gradient(135deg, #e74c3c20 0%, #c0392b20 100%);
+            border-color: var(--danger-color);
+        }}
+
+        .summary-card .value {{
+            font-size: 2.5em;
+            font-weight: bold;
+            margin: 10px 0;
+        }}
+
+        .summary-card.success .value {{
+            color: var(--success-color);
+        }}
+
+        .summary-card.danger .value {{
+            color: var(--danger-color);
+        }}
+
+        .summary-card .label {{
+            color: #7f8c8d;
+            font-size: 0.95em;
+            text-transform: uppercase;
+            letter-spacing: 1px;
+        }}
+
+        .test-suite {{
+            background: white;
+            border: 1px solid #e0e0e0;
+            border-radius: 10px;
+            margin-bottom: 30px;
+            overflow: hidden;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .suite-header {{
+            background: linear-gradient(135deg, var(--secondary-color), #5dade2);
+            color: white;
+            padding: 20px 30px;
+            cursor: pointer;
+            position: relative;
+            transition: all 0.3s ease;
+        }}
+
+        .suite-header:hover {{
+            background: linear-gradient(135deg, #2980b9, var(--secondary-color));
+        }}
+
+        .suite-header h2 {{
+            margin: 0 0 10px 0;
+            font-size: 1.5em;
+            display: flex;
+            align-items: center;
+            justify-content: space-between;
+        }}
+
+        .suite-stats {{
+            display: flex;
+            gap: 20px;
+            font-size: 0.9em;
+            opacity: 0.95;
+        }}
+
+        .suite-content {{
+            padding: 25px;
+            background: #fafafa;
+            max-height: 0;
+            overflow: hidden;
+            transition: max-height 0.5s ease;
+        }}
+
+        .suite-content.expanded {{
+            max-height: 2000px;
+        }}
+
+        .test-table {{
+            width: 100%;
+            border-collapse: collapse;
+            background: white;
+            border-radius: 8px;
+            overflow: hidden;
+        }}
+
+        .test-table th {{
+            background: var(--primary-color);
+            color: white;
+            padding: 12px;
+            text-align: left;
+            font-weight: 600;
+        }}
+
+        .test-table td {{
+            padding: 12px;
+            border-bottom: 1px solid #e0e0e0;
+        }}
+
+        .test-table tr:last-child td {{
+            border-bottom: none;
+        }}
+
+        .test-table tr:hover {{
+            background: #f5f5f5;
+        }}
+
+        .status {{
+            display: inline-block;
+            padding: 4px 12px;
+            border-radius: 20px;
+            font-size: 0.85em;
+            font-weight: 600;
+            text-transform: uppercase;
+        }}
+
+        .status.passed {{
+            background: var(--success-color);
+            color: white;
+        }}
+
+        .status.failed {{
+            background: var(--danger-color);
+            color: white;
+        }}
+
+        .status.skipped {{
+            background: var(--warning-color);
+            color: white;
+        }}
+
+        .progress-bar {{
+            width: 100%;
+            height: 30px;
+            background: #e0e0e0;
+            border-radius: 15px;
+            overflow: hidden;
+            margin: 20px 0;
+            box-shadow: inset 0 2px 4px rgba(0,0,0,0.1);
+        }}
+
+        .progress-fill {{
+            height: 100%;
+            display: flex;
+            transition: width 0.5s ease;
+        }}
+
+        .progress-passed {{
+            background: linear-gradient(135deg, var(--success-color), #2ecc71);
+        }}
+
+        .progress-failed {{
+            background: linear-gradient(135deg, var(--danger-color), #c0392b);
+        }}
+
+        .progress-skipped {{
+            background: linear-gradient(135deg, var(--warning-color), #e67e22);
+        }}
+
+        .graph-container {{
+            margin: 30px 0;
+            text-align: center;
+        }}
+
+        .graph-container img {{
+            max-width: 100%;
+            height: auto;
+            border-radius: 8px;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .config-section {{
+            background: #f8f9fa;
+            border-left: 4px solid var(--secondary-color);
+            padding: 20px;
+            margin: 30px 0;
+            border-radius: 4px;
+        }}
+
+        .config-section h3 {{
+            color: var(--primary-color);
+            margin-bottom: 15px;
+        }}
+
+        .config-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
+            gap: 10px;
+        }}
+
+        .config-item {{
+            display: flex;
+            padding: 8px;
+            background: white;
+            border-radius: 4px;
+        }}
+
+        .config-key {{
+            font-weight: 600;
+            color: var(--primary-color);
+            margin-right: 10px;
+        }}
+
+        .config-value {{
+            color: #555;
+        }}
+
+        .footer {{
+            text-align: center;
+            padding: 20px;
+            background: var(--light-bg);
+            color: #7f8c8d;
+            border-top: 1px solid #e0e0e0;
+        }}
+
+        .toggle-icon {{
+            transition: transform 0.3s ease;
+            display: inline-block;
+        }}
+
+        .suite-header.expanded .toggle-icon {{
+            transform: rotate(90deg);
+        }}
+
+        @media (max-width: 768px) {{
+            .summary-grid {{
+                grid-template-columns: 1fr;
+            }}
+
+            .config-grid {{
+                grid-template-columns: 1fr;
+            }}
+
+            h1 {{
+                font-size: 1.8em;
+            }}
+        }}
+    </style>
+</head>
+<body>
+    <div class="container">
+        <div class="header">
+            <h1>🧪 NFS Test Results</h1>
+            <div class="subtitle">Generated on {timestamp}</div>
+        </div>
+
+        <div class="content">
+            <!-- Summary Cards -->
+            <div class="summary-grid">
+                <div class="summary-card">
+                    <div class="label">Total Tests</div>
+                    <div class="value">{total_tests}</div>
+                </div>
+                <div class="summary-card success">
+                    <div class="label">Passed</div>
+                    <div class="value">{passed_tests}</div>
+                </div>
+                <div class="summary-card danger">
+                    <div class="label">Failed</div>
+                    <div class="value">{failed_tests}</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Pass Rate</div>
+                    <div class="value">{pass_rate:.1f}%</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Total Time</div>
+                    <div class="value">{total_time}</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Test Suites</div>
+                    <div class="value">{num_suites}</div>
+                </div>
+            </div>
+
+            <!-- Overall Progress Bar -->
+            <div class="progress-bar">
+                <div class="progress-fill" style="width: 100%;">
+                    <div class="progress-passed" style="width: {pass_percentage:.1f}%;"></div>
+                    <div class="progress-failed" style="width: {fail_percentage:.1f}%;"></div>
+                </div>
+            </div>
+
+            <!-- Graphs -->
+            {graphs_html}
+
+            <!-- Test Suites -->
+            <h2 style="margin: 40px 0 20px 0; color: var(--primary-color);">Test Suite Details</h2>
+            {test_suites_html}
+
+            <!-- Configuration -->
+            {config_html}
+        </div>
+
+        <div class="footer">
+            <p>Generated by kdevops NFS Test Visualization</p>
+            <p>Report generated at {timestamp}</p>
+        </div>
+    </div>
+
+    <script>
+        // Toggle test suite expansion
+        document.querySelectorAll('.suite-header').forEach(header => {{
+            header.addEventListener('click', () => {{
+                header.classList.toggle('expanded');
+                const content = header.nextElementSibling;
+                content.classList.toggle('expanded');
+            }});
+        }});
+
+        // Auto-expand suites with failures
+        document.addEventListener('DOMContentLoaded', () => {{
+            document.querySelectorAll('.suite-header[data-has-failures="true"]').forEach(header => {{
+                header.click();
+            }});
+        }});
+    </script>
+</body>
+</html>
+"""
+
+
+def format_time(seconds):
+    """Format seconds into human-readable time"""
+    if seconds < 60:
+        return f"{seconds:.1f}s"
+    elif seconds < 3600:
+        minutes = seconds / 60
+        return f"{minutes:.1f}m"
+    else:
+        hours = seconds / 3600
+        return f"{hours:.1f}h"
+
+
+def generate_suite_chart(suite_name, suite_data, output_dir):
+    """Generate a pie chart for test suite results"""
+    if not HAS_MATPLOTLIB:
+        return None
+
+    try:
+        # Count results
+        passed = sum(r["summary"]["passed"] for r in suite_data)
+        failed = sum(r["summary"]["failed"] for r in suite_data)
+
+        if passed + failed == 0:
+            return None
+
+        # Create pie chart
+        fig, ax = plt.subplots(figsize=(6, 6))
+        labels = []
+        sizes = []
+        colors = []
+
+        if passed > 0:
+            labels.append(f"Passed ({passed})")
+            sizes.append(passed)
+            colors.append("#27ae60")
+
+        if failed > 0:
+            labels.append(f"Failed ({failed})")
+            sizes.append(failed)
+            colors.append("#e74c3c")
+
+        ax.pie(
+            sizes,
+            labels=labels,
+            colors=colors,
+            autopct="%1.1f%%",
+            startangle=90,
+            textprops={"fontsize": 12},
+        )
+        ax.set_title(
+            f"{suite_name.upper()} Test Results", fontsize=14, fontweight="bold"
+        )
+
+        # Save to file
+        chart_path = os.path.join(output_dir, f"{suite_name}_pie_chart.png")
+        plt.savefig(chart_path, dpi=100, bbox_inches="tight", transparent=True)
+        plt.close()
+
+        return chart_path
+    except Exception as e:
+        print(
+            f"Warning: Could not generate chart for {suite_name}: {e}", file=sys.stderr
+        )
+        return None
+
+
+def generate_overall_chart(results, output_dir):
+    """Generate overall test results chart"""
+    if not HAS_MATPLOTLIB:
+        return None
+
+    try:
+        summary = results["overall_summary"]
+
+        # Create figure with two subplots
+        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
+
+        # Pie chart for pass/fail
+        passed = summary["total_passed"]
+        failed = summary["total_failed"]
+
+        if passed + failed > 0:
+            sizes = [passed, failed]
+            labels = [f"Passed ({passed})", f"Failed ({failed})"]
+            colors = ["#27ae60", "#e74c3c"]
+
+            ax1.pie(
+                sizes,
+                labels=labels,
+                colors=colors,
+                autopct="%1.1f%%",
+                startangle=90,
+                textprops={"fontsize": 12},
+            )
+            ax1.set_title("Overall Test Results", fontsize=14, fontweight="bold")
+
+        # Bar chart for test suites; per-suite counts come from the parsed
+        # suite results, not the overall summary
+        if summary["test_suites_run"]:
+            suites = summary["test_suites_run"]
+            suite_counts = [
+                sum(r["summary"]["total"] for r in results["test_suites"].get(s, []))
+                for s in suites
+            ]
+
+            bars = ax2.bar(range(len(suites)), suite_counts, color="#3498db")
+            ax2.set_xlabel("Test Suite", fontsize=12)
+            ax2.set_ylabel("Number of Tests", fontsize=12)
+            ax2.set_title("Tests per Suite", fontsize=14, fontweight="bold")
+            ax2.set_xticks(range(len(suites)))
+            ax2.set_xticklabels(suites, rotation=45, ha="right")
+
+            # Add value labels on bars
+            for bar in bars:
+                height = bar.get_height()
+                ax2.text(
+                    bar.get_x() + bar.get_width() / 2.0,
+                    height,
+                    f"{int(height)}",
+                    ha="center",
+                    va="bottom",
+                )
+
+        plt.tight_layout()
+
+        # Save to file
+        chart_path = os.path.join(output_dir, "overall_results.png")
+        plt.savefig(chart_path, dpi=100, bbox_inches="tight", transparent=True)
+        plt.close()
+
+        return chart_path
+    except Exception as e:
+        print(f"Warning: Could not generate overall chart: {e}", file=sys.stderr)
+        return None
+
+
+def embed_image(image_path):
+    """Embed image as base64 data URI"""
+    if not os.path.exists(image_path):
+        return None
+
+    try:
+        with open(image_path, "rb") as f:
+            data = base64.b64encode(f.read()).decode()
+        return f"data:image/png;base64,{data}"
+    except OSError:
+        return None
+
+
+def generate_html(results, output_dir):
+    """Generate HTML report from parsed results"""
+    summary = results["overall_summary"]
+
+    # Calculate statistics
+    total_tests = summary["total_tests"]
+    passed_tests = summary["total_passed"]
+    failed_tests = summary["total_failed"]
+    pass_rate = (passed_tests / total_tests * 100) if total_tests > 0 else 0
+    pass_percentage = pass_rate
+    fail_percentage = 100 - pass_percentage
+    total_time = format_time(summary["total_time"])
+    num_suites = len(summary["test_suites_run"])
+
+    # Generate graphs
+    graphs_html = ""
+    overall_chart = generate_overall_chart(results, output_dir)
+    if overall_chart:
+        img_data = embed_image(overall_chart)
+        if img_data:
+            graphs_html += f"""
+            <div class="graph-container">
+                <h2 style="color: var(--primary-color); margin-bottom: 20px;">Test Results Overview</h2>
+                <img src="{img_data}" alt="Overall Results">
+            </div>
+            """
+
+    # Generate test suites HTML
+    test_suites_html = ""
+    for suite_name, suite_data in results["test_suites"].items():
+        if not suite_data:
+            continue
+
+        # Calculate suite statistics
+        suite_total = sum(r["summary"]["total"] for r in suite_data)
+        suite_passed = sum(r["summary"]["passed"] for r in suite_data)
+        suite_failed = sum(r["summary"]["failed"] for r in suite_data)
+        suite_time = sum(r["summary"]["total_time"] for r in suite_data)
+        has_failures = suite_failed > 0
+
+        # Generate suite chart
+        suite_chart = generate_suite_chart(suite_name, suite_data, output_dir)
+
+        # Build test details table
+        test_rows = ""
+        for result in suite_data:
+            for test in result["tests"]:
+                status_class = test["status"].lower()
+                test_rows += f"""
+                <tr>
+                    <td>{test['name']}</td>
+                    <td>{test['description'][:100]}{'...' if len(test['description']) > 100 else ''}</td>
+                    <td><span class="status {status_class}">{test['status']}</span></td>
+                    <td>{test['duration']:.3f}s</td>
+                </tr>
+                """
+
+        # Build suite HTML
+        test_suites_html += f"""
+        <div class="test-suite">
+            <div class="suite-header" data-has-failures="{str(has_failures).lower()}">
+                <h2>
+                    <span><span class="toggle-icon">▶</span> {suite_name.upper()}</span>
+                    <span style="font-size: 0.7em; font-weight: normal;">
+                        {suite_passed}/{suite_total} passed
+                    </span>
+                </h2>
+                <div class="suite-stats">
+                    <span>✓ Passed: {suite_passed}</span>
+                    <span>✗ Failed: {suite_failed}</span>
+                    <span>⏱ Time: {format_time(suite_time)}</span>
+                </div>
+            </div>
+            <div class="suite-content">
+                {f'<div class="graph-container"><img src="{embed_image(suite_chart)}" alt="{suite_name} Results"></div>' if suite_chart and embed_image(suite_chart) else ''}
+                <table class="test-table">
+                    <thead>
+                        <tr>
+                            <th>Test Name</th>
+                            <th>Description</th>
+                            <th>Status</th>
+                            <th>Duration</th>
+                        </tr>
+                    </thead>
+                    <tbody>
+                        {test_rows}
+                    </tbody>
+                </table>
+            </div>
+        </div>
+        """
+
+    # Generate configuration HTML
+    config_html = ""
+    if results["test_suites"]:
+        # Get configuration from first test suite
+        for suite_data in results["test_suites"].values():
+            if suite_data and suite_data[0]["configuration"]:
+                config = suite_data[0]["configuration"]
+                config_items = ""
+                for key, value in sorted(config.items()):
+                    if key and value and value != "None":
+                        config_items += f"""
+                        <div class="config-item">
+                            <span class="config-key">{key.replace('_', ' ').title()}:</span>
+                            <span class="config-value">{value}</span>
+                        </div>
+                        """
+
+                if config_items:
+                    config_html = f"""
+                    <div class="config-section">
+                        <h3>Test Configuration</h3>
+                        <div class="config-grid">
+                            {config_items}
+                        </div>
+                    </div>
+                    """
+                break
+
+    # Generate final HTML
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    html_content = HTML_TEMPLATE.format(
+        timestamp=timestamp,
+        total_tests=total_tests,
+        passed_tests=passed_tests,
+        failed_tests=failed_tests,
+        pass_rate=pass_rate,
+        pass_percentage=pass_percentage,
+        fail_percentage=fail_percentage,
+        total_time=total_time,
+        num_suites=num_suites,
+        graphs_html=graphs_html,
+        test_suites_html=test_suites_html,
+        config_html=config_html,
+    )
+
+    # Write HTML file
+    html_path = os.path.join(output_dir, "index.html")
+    with open(html_path, "w") as f:
+        f.write(html_content)
+
+    return html_path
+
+
+def main():
+    """Main entry point"""
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        results_dir = "workflows/nfstest/results/last-run"
+
+    if not os.path.exists(results_dir):
+        print(
+            f"Error: Results directory '{results_dir}' does not exist", file=sys.stderr
+        )
+        sys.exit(1)
+
+    # Check for parsed results
+    parsed_file = os.path.join(results_dir, "parsed_results.json")
+    if not os.path.exists(parsed_file):
+        print(
+            "Error: Parsed results file not found. Run parse_nfstest_results.py first.",
+            file=sys.stderr,
+        )
+        sys.exit(1)
+
+    # Load parsed results
+    with open(parsed_file, "r") as f:
+        results = json.load(f)
+
+    # Create HTML output directory - use absolute path from results_dir
+    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(results_dir)))
+    html_dir = os.path.join(base_dir, "html")
+    os.makedirs(html_dir, exist_ok=True)
+
+    # Generate HTML report
+    html_path = generate_html(results, html_dir)
+
+    print(f"HTML report generated: {html_path}")
+    print(f"Directory ready for transfer: {html_dir}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/workflows/nfstest/scripts/parse_nfstest_results.py b/workflows/nfstest/scripts/parse_nfstest_results.py
new file mode 100755
index 000000000000..40d638fa3eae
--- /dev/null
+++ b/workflows/nfstest/scripts/parse_nfstest_results.py
@@ -0,0 +1,277 @@
+#!/usr/bin/env python3
+"""
+Parse NFS test results from log files and extract key metrics.
+"""
+
+import os
+import re
+import sys
+import json
+import glob
+from datetime import datetime
+from pathlib import Path
+from collections import defaultdict
+
+
+def parse_timestamp(timestamp_str):
+    """Parse timestamp from log format"""
+    try:
+        # Handle format: 17:18:41.048703
+        time_parts = timestamp_str.split(":")
+        if len(time_parts) == 3:
+            hours = int(time_parts[0])
+            minutes = int(time_parts[1])
+            seconds = float(time_parts[2])
+            return hours * 3600 + minutes * 60 + seconds
+    except (ValueError, IndexError):
+        pass
+    return 0
+
+
+def parse_test_log(log_path):
+    """Parse a single NFS test log file"""
+    results = {
+        "file": os.path.basename(log_path),
+        "test_suite": "",
+        "tests": [],
+        "summary": {
+            "total": 0,
+            "passed": 0,
+            "failed": 0,
+            "skipped": 0,
+            "total_time": 0,
+        },
+        "configuration": {},
+        "test_groups": defaultdict(list),
+    }
+
+    # Determine test suite from filename
+    if "interop" in log_path:
+        results["test_suite"] = "interop"
+    elif "alloc" in log_path:
+        results["test_suite"] = "alloc"
+    elif "dio" in log_path:
+        results["test_suite"] = "dio"
+    elif "lock" in log_path:
+        results["test_suite"] = "lock"
+    elif "posix" in log_path:
+        results["test_suite"] = "posix"
+    elif "sparse" in log_path:
+        results["test_suite"] = "sparse"
+    elif "ssc" in log_path:
+        results["test_suite"] = "ssc"
+
+    current_test = None
+
+    with open(log_path, "r") as f:
+        lines = f.readlines()
+
+    for i, line in enumerate(lines):
+        # Parse configuration options
+        if line.strip().startswith("OPTS:") and "--" in line:
+            opts_match = re.search(r"OPTS:.*?-\s*(.+?)(?:--|\s*$)", line)
+            if opts_match:
+                opt_str = opts_match.group(1).strip()
+                if "=" in opt_str:
+                    key = opt_str.split("=")[0].replace("-", "_")
+                    value = opt_str.split("=", 1)[1]
+                    results["configuration"][key] = value
+
+        # Parse individual OPTS lines for configuration
+        if line.strip().startswith("OPTS:") and "=" in line and "--" not in line:
+            opts_match = re.search(r"OPTS:.*?-\s*(\w+)\s*=\s*(.+)", line)
+            if opts_match:
+                key = opts_match.group(1).replace("-", "_")
+                value = opts_match.group(2).strip()
+                results["configuration"][key] = value
+
+        # Parse test start
+        if line.startswith("*** "):
+            test_desc = line[4:].strip()
+            current_test = {
+                "name": "",
+                "description": test_desc,
+                "status": "unknown",
+                "duration": 0,
+                "errors": [],
+            }
+
+        # Parse test name
+        if "TEST: Running test" in line:
+            test_match = re.search(r"Running test '(\w+)'", line)
+            if test_match and current_test:
+                current_test["name"] = test_match.group(1)
+
+        # Parse test results
+        if line.strip().startswith("PASS:"):
+            if current_test:
+                current_test["status"] = "passed"
+                pass_msg = line.split("PASS:", 1)[1].strip()
+                if "assertions" not in current_test:
+                    current_test["assertions"] = []
+                current_test["assertions"].append(
+                    {"status": "PASS", "message": pass_msg}
+                )
+
+        if line.strip().startswith("FAIL:"):
+            if current_test:
+                current_test["status"] = "failed"
+                fail_msg = line.split("FAIL:", 1)[1].strip()
+                current_test["errors"].append(fail_msg)
+                if "assertions" not in current_test:
+                    current_test["assertions"] = []
+                current_test["assertions"].append(
+                    {"status": "FAIL", "message": fail_msg}
+                )
+
+        # Parse test timing
+        if line.strip().startswith("TIME:"):
+            time_match = re.search(r"TIME:\s*([\d.]+)\s*(ms|m|s)?", line)
+            if time_match and current_test:
+                duration = float(time_match.group(1))
+                unit = time_match.group(2) or "s"
+                if unit == "m":
+                    duration *= 60
+                elif unit == "ms":
+                    duration /= 1000
+                current_test["duration"] = duration
+                results["tests"].append(current_test)
+
+                # Group tests by category (first part of test name)
+                if current_test["name"]:
+                    # Group by NFS version tested
+                    if "NFSv3" in current_test["description"]:
+                        results["test_groups"]["NFSv3"].append(current_test)
+                    if "NFSv4" in current_test["description"]:
+                        if "NFSv4.1" in current_test["description"]:
+                            results["test_groups"]["NFSv4.1"].append(current_test)
+                        else:
+                            results["test_groups"]["NFSv4.0"].append(current_test)
+
+                current_test = None
+
+        # Parse final summary
+        if "tests (" in line and "passed," in line:
+            summary_match = re.search(
+                r"(\d+)\s+tests\s*\((\d+)\s+passed,\s*(\d+)\s+failed", line
+            )
+            if summary_match:
+                results["summary"]["total"] = int(summary_match.group(1))
+                results["summary"]["passed"] = int(summary_match.group(2))
+                results["summary"]["failed"] = int(summary_match.group(3))
+
+        # Parse total time
+        if line.startswith("Total time:"):
+            time_match = re.search(r"Total time:\s*(.+)", line)
+            if time_match:
+                time_str = time_match.group(1).strip()
+                # Convert format like "2m22.099818s" to seconds
+                total_seconds = 0
+                if "m" in time_str:
+                    parts = time_str.split("m")
+                    total_seconds += int(parts[0]) * 60
+                    if len(parts) > 1:
+                        seconds_part = parts[1].replace("s", "").strip()
+                        if seconds_part:
+                            total_seconds += float(seconds_part)
+                elif "s" in time_str:
+                    total_seconds = float(time_str.replace("s", "").strip())
+                results["summary"]["total_time"] = total_seconds
+
+    return results
+
+
+def parse_all_results(results_dir):
+    """Parse all test results in a directory"""
+    all_results = {
+        "timestamp": datetime.now().isoformat(),
+        "test_suites": {},
+        "overall_summary": {
+            "total_tests": 0,
+            "total_passed": 0,
+            "total_failed": 0,
+            "total_time": 0,
+            "test_suites_run": [],
+        },
+    }
+
+    # Find all log files
+    log_pattern = os.path.join(results_dir, "**/*.log")
+    log_files = glob.glob(log_pattern, recursive=True)
+
+    for log_file in sorted(log_files):
+        # Parse the log file
+        suite_results = parse_test_log(log_file)
+
+        # Determine suite category from path
+        if "/interop/" in log_file:
+            suite_key = "interop"
+        elif "/alloc/" in log_file:
+            suite_key = "alloc"
+        elif "/dio/" in log_file:
+            suite_key = "dio"
+        elif "/lock/" in log_file:
+            suite_key = "lock"
+        elif "/posix/" in log_file:
+            suite_key = "posix"
+        elif "/sparse/" in log_file:
+            suite_key = "sparse"
+        elif "/ssc/" in log_file:
+            suite_key = "ssc"
+        else:
+            suite_key = suite_results["test_suite"] or "unknown"
+
+        # Store results
+        if suite_key not in all_results["test_suites"]:
+            all_results["test_suites"][suite_key] = []
+            all_results["overall_summary"]["test_suites_run"].append(suite_key)
+
+        all_results["test_suites"][suite_key].append(suite_results)
+
+        # Update overall summary
+        all_results["overall_summary"]["total_tests"] += suite_results["summary"][
+            "total"
+        ]
+        all_results["overall_summary"]["total_passed"] += suite_results["summary"][
+            "passed"
+        ]
+        all_results["overall_summary"]["total_failed"] += suite_results["summary"][
+            "failed"
+        ]
+        all_results["overall_summary"]["total_time"] += suite_results["summary"][
+            "total_time"
+        ]
+
+    return all_results
+
+
+def main():
+    """Main entry point"""
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        results_dir = "workflows/nfstest/results/last-run"
+
+    if not os.path.exists(results_dir):
+        print(
+            f"Error: Results directory '{results_dir}' does not exist", file=sys.stderr
+        )
+        sys.exit(1)
+
+    # Parse all results
+    results = parse_all_results(results_dir)
+
+    # Output as JSON
+    print(json.dumps(results, indent=2))
+
+    # Save to file
+    output_file = os.path.join(results_dir, "parsed_results.json")
+    with open(output_file, "w") as f:
+        json.dump(results, f, indent=2)
+
+    print(f"\nResults saved to: {output_file}", file=sys.stderr)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/workflows/nfstest/scripts/visualize_nfstest_results.sh b/workflows/nfstest/scripts/visualize_nfstest_results.sh
new file mode 100755
index 000000000000..e7ddbfa45768
--- /dev/null
+++ b/workflows/nfstest/scripts/visualize_nfstest_results.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+# Visualize NFS test results
+
+SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
+KDEVOPS_DIR="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
+RESULTS_DIR="${1:-$KDEVOPS_DIR/workflows/nfstest/results/last-run}"
+HTML_OUTPUT_DIR="$KDEVOPS_DIR/workflows/nfstest/results/html"
+
+# Check if results directory exists
+if [ ! -d "$RESULTS_DIR" ]; then
+    echo "Error: Results directory '$RESULTS_DIR' does not exist"
+    echo "Please run 'make nfstest-baseline' or 'make nfstest-dev' first to generate test results"
+    exit 1
+fi
+
+# Check if there are any log files
+LOG_COUNT=$(find "$RESULTS_DIR" -name "*.log" 2>/dev/null | wc -l)
+if [ "$LOG_COUNT" -eq 0 ]; then
+    echo "Error: No test log files found in '$RESULTS_DIR'"
+    echo "Please run NFS tests first to generate results"
+    exit 1
+fi
+
+echo "Processing NFS test results from: $RESULTS_DIR"
+
+# Parse the results
+echo "Step 1: Parsing test results..."
+if ! python3 "$SCRIPT_DIR/parse_nfstest_results.py" "$RESULTS_DIR"; then
+    echo "Error: Failed to parse test results"
+    exit 1
+fi
+
+# Generate HTML visualization
+echo "Step 2: Generating HTML visualization..."
+if ! python3 "$SCRIPT_DIR/generate_nfstest_html.py" "$RESULTS_DIR"; then
+    echo "Warning: HTML generation completed with warnings"
+fi
+
+# Check if HTML was generated
+if [ -f "$HTML_OUTPUT_DIR/index.html" ]; then
+    echo ""
+    echo "✓ Visualization complete!"
+    echo ""
+    echo "Results available in: $HTML_OUTPUT_DIR/"
+    echo ""
+    echo "To view locally:"
+    echo "  open $HTML_OUTPUT_DIR/index.html"
+    echo ""
+    echo "To copy to remote system:"
+    echo "  scp -r $HTML_OUTPUT_DIR/ user@remote:/path/to/destination/"
+    echo ""
+
+    # List generated files
+    echo "Generated files:"
+    ls -lh "$HTML_OUTPUT_DIR/"
+else
+    echo "Error: HTML generation failed - no index.html created"
+    exit 1
+fi
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 7/8] fstests: add soak duration to nfs template
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (5 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 6/8] nfstest: add results visualization support Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 20:19 ` [PATCH v2 8/8] pynfs: add visualization support for test results Chuck Lever
  2025-10-03 22:57 ` [PATCH v2 0/8] nfs: few fixes and enhancements Luis Chamberlain
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

Use the soak duration, when enabled, for NFS fstests.

Current analysis (applying fstests_test_uses_soak_duration() from
scripts/workflows/lib/fstests.py in a new script against check.time)
shows that no tests run on NFS actually leverage the soak duration,
but this can change over time.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 playbooks/roles/fstests/templates/nfs/nfs.config | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/playbooks/roles/fstests/templates/nfs/nfs.config b/playbooks/roles/fstests/templates/nfs/nfs.config
index b26c45c11bc3..5d0b98cebd3e 100644
--- a/playbooks/roles/fstests/templates/nfs/nfs.config
+++ b/playbooks/roles/fstests/templates/nfs/nfs.config
@@ -10,6 +10,10 @@ SCRATCH_DEV="{{ fstests_nfs_scratch_devpool }}"
 RESULT_BASE=$PWD/results/$HOST/$(uname -r)
 TEST_DEV={{ fstests_nfs_test_dev }}
 CANON_DEVS=yes
+{% if fstests_soak_duration > 0 -%}
+SOAK_DURATION={{ fstests_soak_duration }}
+{% endif %}
+
 {% if fstests_nfs_section_pnfs -%}
 
 # Test pNFS block
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH v2 8/8] pynfs: add visualization support for test results
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (6 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 7/8] fstests: add soak duration to nfs template Chuck Lever
@ 2025-10-03 20:19 ` Chuck Lever
  2025-10-03 22:57 ` [PATCH v2 0/8] nfs: few fixes and enhancements Luis Chamberlain
  8 siblings, 0 replies; 10+ messages in thread
From: Chuck Lever @ 2025-10-03 20:19 UTC (permalink / raw)
  To: kdevops; +Cc: Luis Chamberlain

From: Luis Chamberlain <mcgrof@kernel.org>

Add 'make pynfs-visualize' target to generate comprehensive HTML reports
with PNG charts for pynfs test results. This makes it easy to understand
test outcomes at a glance and share results.

Features:
- HTML report with test summaries, statistics, and detailed results
- PNG charts showing test distribution (pie and bar charts)
- Comparison charts when multiple NFS versions are tested
- Automatic kernel version detection from results directory
- Self-contained output directory for easy transfer via scp

The visualization script generates:
- index.html: Main report with interactive tabs
- pynfs-v4_0-results.png: NFS v4.0 test charts
- pynfs-v4_1-results.png: NFS v4.1 test charts
- pynfs-vblock-results.png: pNFS block layout charts
- pynfs-comparison.png: Side-by-side version comparison

Usage:
  make pynfs-visualize                    # Auto-detect kernel
  make pynfs-visualize LAST_KERNEL=<version>  # Specific kernel

Output is generated in:
  workflows/pynfs/results/<kernel>/html/
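The output directory is self-contained because each PNG chart is embedded
directly into index.html as a base64 data URI, so no external image files
need to accompany the report. A minimal sketch of that embedding, using
stand-in bytes rather than a real chart file:

```python
import base64


def to_data_uri(png_bytes):
    """Embed raw PNG bytes as a data URI usable in an <img src="..."> tag."""
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode()


# Stand-in PNG signature bytes; a real chart would be read with open(path, "rb")
fake_png = b"\x89PNG\r\n\x1a\n"
print(to_data_uri(fake_png)[:22])  # data:image/png;base64,
```

Because everything is inlined, the html/ directory can be copied as-is
with scp and opened anywhere in a browser.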

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 scripts/workflows/pynfs/visualize_results.py | 1014 ++++++++++++++++++
 workflows/pynfs/Makefile                     |   17 +-
 2 files changed, 1030 insertions(+), 1 deletion(-)
 create mode 100755 scripts/workflows/pynfs/visualize_results.py

diff --git a/scripts/workflows/pynfs/visualize_results.py b/scripts/workflows/pynfs/visualize_results.py
new file mode 100755
index 000000000000..15b0089bd379
--- /dev/null
+++ b/scripts/workflows/pynfs/visualize_results.py
@@ -0,0 +1,1014 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: GPL-2.0
+"""
+Generate HTML visualization report for pynfs test results with charts and summaries.
+Creates both an HTML report and PNG chart files.
+"""
+
+import json
+import os
+import sys
+import argparse
+from pathlib import Path
+from datetime import datetime
+import re
+
+# Try to import matplotlib for PNG generation
+try:
+    import matplotlib
+
+    matplotlib.use("Agg")  # Use non-interactive backend
+    import matplotlib.pyplot as plt
+    import matplotlib.patches as mpatches
+
+    MATPLOTLIB_AVAILABLE = True
+except ImportError:
+    MATPLOTLIB_AVAILABLE = False
+    print("Warning: matplotlib not available, PNG charts will not be generated")
+    print("Install with: pip3 install matplotlib")
+
+
+def load_json_results(filepath):
+    """Load and parse a JSON result file."""
+    try:
+        with open(filepath, "r") as f:
+            return json.load(f)
+    except Exception as e:
+        print(f"Error loading {filepath}: {e}")
+        return None
+
+
+def categorize_tests(testcases):
+    """Categorize tests by their class/module."""
+    categories = {}
+    for test in testcases:
+        classname = test.get("classname", "unknown")
+        if classname not in categories:
+            categories[classname] = {
+                "passed": [],
+                "failed": [],
+                "skipped": [],
+                "error": [],
+            }
+
+        if test.get("skipped"):
+            categories[classname]["skipped"].append(test)
+        elif test.get("failure"):
+            categories[classname]["failed"].append(test)
+        elif test.get("error"):
+            categories[classname]["error"].append(test)
+        else:
+            categories[classname]["passed"].append(test)
+
+    return categories
+
+
+def generate_png_charts(charts, output_dir):
+    """Generate PNG charts using matplotlib."""
+    if not MATPLOTLIB_AVAILABLE:
+        return []
+
+    png_files = []
+
+    # Set up the style
+    plt.style.use("seaborn-v0_8-darkgrid")
+
+    for chart in charts:
+        version = chart["version"]
+
+        # Create figure with subplots
+        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
+        fig.suptitle(
+            f'PyNFS {version.upper()} Test Results - Kernel {chart.get("kernel", "Unknown")}',
+            fontsize=16,
+            fontweight="bold",
+        )
+
+        # Pie chart
+        sizes = [chart["passed"], chart["failed"], chart["errors"], chart["skipped"]]
+        labels = ["Passed", "Failed", "Errors", "Skipped"]
+        colors = ["#48bb78", "#f56565", "#ed8936", "#a0aec0"]
+        explode = (0.05, 0.1, 0.1, 0)  # Explode failed and error slices
+
+        # Only show non-zero values
+        non_zero_sizes = []
+        non_zero_labels = []
+        non_zero_colors = []
+        non_zero_explode = []
+        for i, size in enumerate(sizes):
+            if size > 0:
+                non_zero_sizes.append(size)
+                non_zero_labels.append(f"{labels[i]}: {size}")
+                non_zero_colors.append(colors[i])
+                non_zero_explode.append(explode[i])
+
+        ax1.pie(
+            non_zero_sizes,
+            explode=non_zero_explode,
+            labels=non_zero_labels,
+            colors=non_zero_colors,
+            autopct="%1.1f%%",
+            startangle=90,
+            shadow=True,
+        )
+        ax1.set_title("Test Distribution")
+
+        # Bar chart
+        ax2.bar(labels, sizes, color=colors, edgecolor="black", linewidth=1.5)
+        ax2.set_ylabel("Number of Tests", fontweight="bold")
+        ax2.set_title("Test Counts")
+        ax2.grid(axis="y", alpha=0.3)
+
+        # Add text annotations on bars
+        for i, (label, value) in enumerate(zip(labels, sizes)):
+            ax2.text(
+                i,
+                value + max(sizes) * 0.01,
+                str(value),
+                ha="center",
+                va="bottom",
+                fontweight="bold",
+            )
+
+        # Add summary statistics
+        total = chart["total"]
+        pass_rate = chart["pass_rate"]
+        fig.text(
+            0.5,
+            0.02,
+            f"Total Tests: {total} | Pass Rate: {pass_rate}%",
+            ha="center",
+            fontsize=12,
+            fontweight="bold",
+            bbox=dict(boxstyle="round", facecolor="wheat", alpha=0.5),
+        )
+
+        plt.tight_layout()
+
+        # Save the figure
+        png_filename = f'pynfs-{version.replace(".", "_")}-results.png'
+        png_path = output_dir / png_filename
+        plt.savefig(png_path, dpi=150, bbox_inches="tight")
+        plt.close()
+
+        png_files.append(png_filename)
+        print(f"  Generated: {png_path}")
+
+    # Generate a summary chart comparing all versions
+    if len(charts) > 1:
+        fig, axes = plt.subplots(2, 2, figsize=(14, 10))
+        fig.suptitle("PyNFS Test Results Comparison", fontsize=18, fontweight="bold")
+
+        # Prepare data
+        versions = [c["version"].upper() for c in charts]
+        passed = [c["passed"] for c in charts]
+        failed = [c["failed"] for c in charts]
+        errors = [c["errors"] for c in charts]
+        skipped = [c["skipped"] for c in charts]
+        pass_rates = [c["pass_rate"] for c in charts]
+
+        x = range(len(versions))
+        width = 0.2
+
+        # Grouped bar chart
+        ax = axes[0, 0]
+        ax.bar(
+            [i - width * 1.5 for i in x], passed, width, label="Passed", color="#48bb78"
+        )
+        ax.bar(
+            [i - width * 0.5 for i in x], failed, width, label="Failed", color="#f56565"
+        )
+        ax.bar(
+            [i + width * 0.5 for i in x], errors, width, label="Errors", color="#ed8936"
+        )
+        ax.bar(
+            [i + width * 1.5 for i in x],
+            skipped,
+            width,
+            label="Skipped",
+            color="#a0aec0",
+        )
+        ax.set_xlabel("Version")
+        ax.set_ylabel("Number of Tests")
+        ax.set_title("Test Results by Version")
+        ax.set_xticks(x)
+        ax.set_xticklabels(versions)
+        ax.legend()
+        ax.grid(axis="y", alpha=0.3)
+
+        # Pass rate comparison
+        ax = axes[0, 1]
+        bars = ax.bar(
+            versions,
+            pass_rates,
+            color=[
+                "#48bb78" if p >= 90 else "#ed8936" if p >= 70 else "#f56565"
+                for p in pass_rates
+            ],
+        )
+        ax.set_ylabel("Pass Rate (%)")
+        ax.set_title("Pass Rate Comparison")
+        ax.set_ylim(0, 105)
+        ax.grid(axis="y", alpha=0.3)
+
+        # Add value labels on bars
+        for bar, rate in zip(bars, pass_rates):
+            height = bar.get_height()
+            ax.text(
+                bar.get_x() + bar.get_width() / 2.0,
+                height + 1,
+                f"{rate:.1f}%",
+                ha="center",
+                va="bottom",
+                fontweight="bold",
+            )
+
+        # Stacked bar chart
+        ax = axes[1, 0]
+        ax.bar(versions, passed, label="Passed", color="#48bb78")
+        ax.bar(versions, failed, bottom=passed, label="Failed", color="#f56565")
+        ax.bar(
+            versions,
+            errors,
+            bottom=[p + f for p, f in zip(passed, failed)],
+            label="Errors",
+            color="#ed8936",
+        )
+        ax.bar(
+            versions,
+            skipped,
+            bottom=[p + f + e for p, f, e in zip(passed, failed, errors)],
+            label="Skipped",
+            color="#a0aec0",
+        )
+        ax.set_ylabel("Number of Tests")
+        ax.set_title("Stacked Test Results")
+        ax.legend()
+        ax.grid(axis="y", alpha=0.3)
+
+        # Summary table
+        ax = axes[1, 1]
+        ax.axis("tight")
+        ax.axis("off")
+
+        table_data = [
+            ["Version", "Total", "Passed", "Failed", "Errors", "Skipped", "Pass Rate"]
+        ]
+        for c in charts:
+            table_data.append(
+                [
+                    c["version"].upper(),
+                    str(c["total"]),
+                    str(c["passed"]),
+                    str(c["failed"]),
+                    str(c["errors"]),
+                    str(c["skipped"]),
+                    f"{c['pass_rate']}%",
+                ]
+            )
+
+        table = ax.table(cellText=table_data, loc="center", cellLoc="center")
+        table.auto_set_font_size(False)
+        table.set_fontsize(10)
+        table.scale(1.2, 1.5)
+
+        # Style the header row
+        for i in range(7):
+            table[(0, i)].set_facecolor("#4a5568")
+            table[(0, i)].set_text_props(weight="bold", color="white")
+
+        # Color code the cells
+        for i in range(1, len(table_data)):
+            # Pass rate column
+            pass_rate = float(table_data[i][6].strip("%"))
+            if pass_rate >= 90:
+                table[(i, 6)].set_facecolor("#c6f6d5")
+            elif pass_rate >= 70:
+                table[(i, 6)].set_facecolor("#feebc8")
+            else:
+                table[(i, 6)].set_facecolor("#fed7d7")
+
+        plt.tight_layout()
+
+        # Save comparison chart
+        comparison_path = output_dir / "pynfs-comparison.png"
+        plt.savefig(comparison_path, dpi=150, bbox_inches="tight")
+        plt.close()
+
+        png_files.append("pynfs-comparison.png")
+        print(f"  Generated: {comparison_path}")
+
+    return png_files
+
+
+def generate_chart_data(results, kernel_version):
+    """Generate data for charts."""
+    charts = []
+    for version, data in results.items():
+        if not data:
+            continue
+
+        total = data.get("tests", 0)
+        passed = (
+            total
+            - data.get("failures", 0)
+            - data.get("errors", 0)
+            - data.get("skipped", 0)
+        )
+        failed = data.get("failures", 0)
+        errors = data.get("errors", 0)
+        skipped = data.get("skipped", 0)
+
+        charts.append(
+            {
+                "version": version,
+                "kernel": kernel_version,
+                "total": total,
+                "passed": passed,
+                "failed": failed,
+                "errors": errors,
+                "skipped": skipped,
+                "pass_rate": round((passed / total * 100) if total > 0 else 0, 2),
+            }
+        )
+
+    return charts
+
+
+def generate_html_report(results_dir, kernel_version):
+    """Generate the main HTML report with embedded charts and links to PNG files."""
+    results = {}
+
+    # Load all JSON files for this kernel version
+    for json_file in Path(results_dir).glob(f"{kernel_version}*.json"):
+        # Extract version from filename (e.g., v4.0, v4.1, vblock)
+        match = re.search(r"-v(4\.[01]|block)\.json$", str(json_file))
+        if match:
+            version = "v" + match.group(1)
+            results[version] = load_json_results(json_file)
+
+    if not results:
+        print(f"No results found for kernel {kernel_version}")
+        return None, []
+
+    # Generate chart data
+    charts = generate_chart_data(results, kernel_version)
+
+    # Create output directory for HTML and PNGs
+    output_dir = Path(results_dir) / "html"
+    output_dir.mkdir(parents=True, exist_ok=True)
+
+    # Generate PNG charts
+    png_files = generate_png_charts(charts, output_dir)
+
+    # Generate detailed test results
+    detailed_results = {}
+    for version, data in results.items():
+        if data and "testcase" in data:
+            detailed_results[version] = categorize_tests(data["testcase"])
+
+    # Create HTML content
+    html_content = f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>PyNFS Test Results - {kernel_version}</title>
+    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
+    <style>
+        * {{
+            margin: 0;
+            padding: 0;
+            box-sizing: border-box;
+        }}
+
+        body {{
+            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, sans-serif;
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            min-height: 100vh;
+            padding: 20px;
+        }}
+
+        .container {{
+            max-width: 1400px;
+            margin: 0 auto;
+        }}
+
+        .header {{
+            background: white;
+            border-radius: 15px;
+            padding: 30px;
+            margin-bottom: 30px;
+            box-shadow: 0 10px 30px rgba(0,0,0,0.1);
+        }}
+
+        h1 {{
+            color: #2d3748;
+            font-size: 2.5em;
+            margin-bottom: 10px;
+        }}
+
+        .subtitle {{
+            color: #718096;
+            font-size: 1.1em;
+        }}
+
+        .png-links {{
+            margin-top: 20px;
+            padding: 15px;
+            background: #f7fafc;
+            border-radius: 8px;
+        }}
+
+        .png-links h3 {{
+            color: #2d3748;
+            margin-bottom: 10px;
+        }}
+
+        .png-links a {{
+            color: #667eea;
+            text-decoration: none;
+            margin-right: 15px;
+            font-weight: 500;
+        }}
+
+        .png-links a:hover {{
+            text-decoration: underline;
+        }}
+
+        .summary-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
+            gap: 20px;
+            margin-bottom: 30px;
+        }}
+
+        .summary-card {{
+            background: white;
+            border-radius: 15px;
+            padding: 25px;
+            box-shadow: 0 10px 30px rgba(0,0,0,0.1);
+            transition: transform 0.3s ease;
+        }}
+
+        .summary-card:hover {{
+            transform: translateY(-5px);
+        }}
+
+        .card-title {{
+            font-size: 1.3em;
+            color: #4a5568;
+            margin-bottom: 20px;
+            font-weight: 600;
+        }}
+
+        .stats-grid {{
+            display: grid;
+            grid-template-columns: repeat(2, 1fr);
+            gap: 15px;
+        }}
+
+        .stat-item {{
+            padding: 10px;
+            background: #f7fafc;
+            border-radius: 8px;
+        }}
+
+        .stat-label {{
+            color: #718096;
+            font-size: 0.9em;
+            margin-bottom: 5px;
+        }}
+
+        .stat-value {{
+            font-size: 1.5em;
+            font-weight: bold;
+        }}
+
+        .stat-value.passed {{
+            color: #48bb78;
+        }}
+
+        .stat-value.failed {{
+            color: #f56565;
+        }}
+
+        .stat-value.error {{
+            color: #ed8936;
+        }}
+
+        .stat-value.skipped {{
+            color: #a0aec0;
+        }}
+
+        .chart-container {{
+            position: relative;
+            height: 300px;
+            margin-top: 20px;
+        }}
+
+        .png-preview {{
+            margin-top: 20px;
+            text-align: center;
+        }}
+
+        .png-preview img {{
+            max-width: 100%;
+            border-radius: 8px;
+            box-shadow: 0 4px 6px rgba(0,0,0,0.1);
+        }}
+
+        .details-section {{
+            background: white;
+            border-radius: 15px;
+            padding: 30px;
+            margin-bottom: 30px;
+            box-shadow: 0 10px 30px rgba(0,0,0,0.1);
+        }}
+
+        .test-category {{
+            margin-bottom: 25px;
+            padding: 20px;
+            background: #f8f9fa;
+            border-radius: 10px;
+        }}
+
+        .category-header {{
+            font-size: 1.2em;
+            color: #2d3748;
+            margin-bottom: 15px;
+            font-weight: 600;
+            border-bottom: 2px solid #e2e8f0;
+            padding-bottom: 10px;
+        }}
+
+        .test-list {{
+            display: grid;
+            gap: 10px;
+        }}
+
+        .test-item {{
+            padding: 12px;
+            border-radius: 6px;
+            display: flex;
+            justify-content: space-between;
+            align-items: center;
+            transition: all 0.2s ease;
+        }}
+
+        .test-item:hover {{
+            transform: translateX(5px);
+        }}
+
+        .test-item.passed {{
+            background: #c6f6d5;
+            border-left: 4px solid #48bb78;
+        }}
+
+        .test-item.failed {{
+            background: #fed7d7;
+            border-left: 4px solid #f56565;
+        }}
+
+        .test-item.skipped {{
+            background: #e2e8f0;
+            border-left: 4px solid #a0aec0;
+        }}
+
+        .test-item.error {{
+            background: #feebc8;
+            border-left: 4px solid #ed8936;
+        }}
+
+        .test-name {{
+            font-family: 'Courier New', monospace;
+            font-size: 0.95em;
+        }}
+
+        .test-code {{
+            background: rgba(0,0,0,0.1);
+            padding: 2px 8px;
+            border-radius: 4px;
+            font-size: 0.85em;
+            font-family: monospace;
+        }}
+
+        .tabs {{
+            display: flex;
+            gap: 10px;
+            margin-bottom: 20px;
+            border-bottom: 2px solid #e2e8f0;
+        }}
+
+        .tab-button {{
+            padding: 10px 20px;
+            background: none;
+            border: none;
+            color: #718096;
+            cursor: pointer;
+            font-size: 1em;
+            transition: all 0.3s ease;
+            position: relative;
+        }}
+
+        .tab-button:hover {{
+            color: #2d3748;
+        }}
+
+        .tab-button.active {{
+            color: #667eea;
+            font-weight: 600;
+        }}
+
+        .tab-button.active::after {{
+            content: '';
+            position: absolute;
+            bottom: -2px;
+            left: 0;
+            right: 0;
+            height: 2px;
+            background: #667eea;
+        }}
+
+        .tab-content {{
+            display: none;
+        }}
+
+        .tab-content.active {{
+            display: block;
+        }}
+
+        .footer {{
+            text-align: center;
+            color: white;
+            margin-top: 40px;
+            font-size: 0.9em;
+        }}
+
+        .progress-bar {{
+            height: 20px;
+            background: #e2e8f0;
+            border-radius: 10px;
+            overflow: hidden;
+            margin: 15px 0;
+        }}
+
+        .progress-fill {{
+            height: 100%;
+            background: linear-gradient(90deg, #48bb78, #38a169);
+            transition: width 0.5s ease;
+        }}
+    </style>
+</head>
+<body>
+    <div class="container">
+        <div class="header">
+            <h1>🧪 PyNFS Test Results</h1>
+            <div class="subtitle">Kernel Version: {kernel_version}</div>
+            <div class="subtitle">Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</div>
+"""
+
+    # Add PNG download links if any were generated
+    if png_files:
+        html_content += """
+            <div class="png-links">
+                <h3>📊 Download Charts:</h3>
+"""
+        for png_file in png_files:
+            html_content += (
+                f'                <a href="{png_file}" download>{png_file}</a>\n'
+            )
+        html_content += """            </div>
+"""
+
+    html_content += """
+        </div>
+
+        <div class="summary-grid">
+"""
+
+    # Add summary cards for each version
+    for chart in charts:
+        version = chart["version"]
+        png_file = f'pynfs-{version.replace(".", "_")}-results.png'
+
+        html_content += f"""
+            <div class="summary-card">
+                <div class="card-title">NFS {version.upper()} Results</div>
+                <div class="stats-grid">
+                    <div class="stat-item">
+                        <div class="stat-label">Total Tests</div>
+                        <div class="stat-value">{chart['total']}</div>
+                    </div>
+                    <div class="stat-item">
+                        <div class="stat-label">Pass Rate</div>
+                        <div class="stat-value passed">{chart['pass_rate']}%</div>
+                    </div>
+                    <div class="stat-item">
+                        <div class="stat-label">Passed</div>
+                        <div class="stat-value passed">{chart['passed']}</div>
+                    </div>
+                    <div class="stat-item">
+                        <div class="stat-label">Failed</div>
+                        <div class="stat-value failed">{chart['failed']}</div>
+                    </div>
+                    <div class="stat-item">
+                        <div class="stat-label">Errors</div>
+                        <div class="stat-value error">{chart['errors']}</div>
+                    </div>
+                    <div class="stat-item">
+                        <div class="stat-label">Skipped</div>
+                        <div class="stat-value skipped">{chart['skipped']}</div>
+                    </div>
+                </div>
+                <div class="progress-bar">
+                    <div class="progress-fill" style="width: {chart['pass_rate']}%"></div>
+                </div>
+"""
+
+        # Add PNG preview if available
+        if png_file in png_files:
+            html_content += f"""
+                <div class="png-preview">
+                    <a href="{png_file}" target="_blank">
+                        <img src="{png_file}" alt="{version.upper()} Results Chart">
+                    </a>
+                </div>
+"""
+        else:
+            # Fallback to JavaScript chart
+            html_content += f"""
+                <div class="chart-container">
+                    <canvas id="chart-{version.replace('.', '')}"></canvas>
+                </div>
+"""
+
+        html_content += """
+            </div>
+"""
+
+    html_content += """
+        </div>
+"""
+
+    # Add comparison chart preview if available
+    if "pynfs-comparison.png" in png_files:
+        html_content += """
+        <div class="details-section">
+            <h2 style="margin-bottom: 20px; color: #2d3748;">Test Results Comparison</h2>
+            <div class="png-preview">
+                <a href="pynfs-comparison.png" target="_blank">
+                    <img src="pynfs-comparison.png" alt="PyNFS Comparison Chart">
+                </a>
+            </div>
+        </div>
+"""
+
+    html_content += """
+        <div class="details-section">
+            <h2 style="margin-bottom: 20px; color: #2d3748;">Detailed Test Results</h2>
+            <div class="tabs">
+"""
+
+    # Add tabs for each version
+    first = True
+    for version in detailed_results.keys():
+        active = "active" if first else ""
+        html_content += f"""
+                <button class="tab-button {active}" onclick="showTab('{version}')">{version.upper()}</button>
+"""
+        first = False
+
+    html_content += """
+            </div>
+"""
+
+    # Add tab content for each version
+    first = True
+    for version, categories in detailed_results.items():
+        active = "active" if first else ""
+        html_content += f"""
+            <div id="tab-{version}" class="tab-content {active}">
+"""
+
+        # Sort categories by name
+        for category_name in sorted(categories.keys()):
+            category = categories[category_name]
+            total_in_category = (
+                len(category["passed"])
+                + len(category["failed"])
+                + len(category["skipped"])
+                + len(category["error"])
+            )
+
+            if total_in_category == 0:
+                continue
+
+            html_content += f"""
+                <div class="test-category">
+                    <div class="category-header">
+                        {category_name} ({total_in_category} tests)
+                    </div>
+                    <div class="test-list">
+"""
+
+            # Add passed tests
+            for test in sorted(category["passed"], key=lambda x: x.get("name", "")):
+                html_content += f"""
+                        <div class="test-item passed">
+                            <span class="test-name">{test.get('name', 'Unknown')}</span>
+                            <span class="test-code">{test.get('code', '')}</span>
+                        </div>
+"""
+
+            # Add failed tests
+            for test in sorted(category["failed"], key=lambda x: x.get("name", "")):
+                html_content += f"""
+                        <div class="test-item failed">
+                            <span class="test-name">{test.get('name', 'Unknown')}</span>
+                            <span class="test-code">{test.get('code', '')}</span>
+                        </div>
+"""
+
+            # Add error tests
+            for test in sorted(category["error"], key=lambda x: x.get("name", "")):
+                html_content += f"""
+                        <div class="test-item error">
+                            <span class="test-name">{test.get('name', 'Unknown')}</span>
+                            <span class="test-code">{test.get('code', '')}</span>
+                        </div>
+"""
+
+            # Add skipped tests (collapsed by default)
+            if category["skipped"]:
+                html_content += f"""
+                        <details>
+                            <summary style="cursor: pointer; padding: 10px; background: #f0f0f0; border-radius: 5px; margin-top: 10px;">
+                                Skipped Tests ({len(category['skipped'])})
+                            </summary>
+                            <div style="margin-top: 10px;">
+"""
+                for test in sorted(
+                    category["skipped"], key=lambda x: x.get("name", "")
+                ):
+                    html_content += f"""
+                                <div class="test-item skipped">
+                                    <span class="test-name">{test.get('name', 'Unknown')}</span>
+                                    <span class="test-code">{test.get('code', '')}</span>
+                                </div>
+"""
+                html_content += """
+                            </div>
+                        </details>
+"""
+
+            html_content += """
+                    </div>
+                </div>
+"""
+
+        html_content += """
+            </div>
+"""
+        first = False
+
+    html_content += """
+        </div>
+
+        <div class="footer">
+            Generated by kdevops pynfs-visualize | 🤖 Generated with Claude Code
+        </div>
+    </div>
+
+    <script>
+        // Tab switching function
+        function showTab(version) {
+            // Hide all tabs
+            document.querySelectorAll('.tab-content').forEach(tab => {
+                tab.classList.remove('active');
+            });
+            document.querySelectorAll('.tab-button').forEach(button => {
+                button.classList.remove('active');
+            });
+
+            // Show selected tab
+            document.getElementById('tab-' + version).classList.add('active');
+            event.target.classList.add('active');
+        }
+"""
+
+    # Add fallback JavaScript charts if matplotlib is not available
+    if not MATPLOTLIB_AVAILABLE:
+        html_content += """
+        // Chart initialization (fallback when PNGs are not available)
+"""
+        for chart in charts:
+            version = chart["version"]
+            canvas_id = f"chart-{version.replace('.', '')}"
+
+            html_content += f"""
+        if (document.getElementById('{canvas_id}')) {{
+            new Chart(document.getElementById('{canvas_id}'), {{
+                type: 'doughnut',
+                data: {{
+                    labels: ['Passed', 'Failed', 'Errors', 'Skipped'],
+                    datasets: [{{
+                        data: [{chart['passed']}, {chart['failed']}, {chart['errors']}, {chart['skipped']}],
+                        backgroundColor: [
+                            '#48bb78',
+                            '#f56565',
+                            '#ed8936',
+                            '#a0aec0'
+                        ],
+                        borderWidth: 0
+                    }}]
+                }},
+                options: {{
+                    responsive: true,
+                    maintainAspectRatio: false,
+                    plugins: {{
+                        legend: {{
+                            position: 'bottom',
+                            labels: {{
+                                padding: 15,
+                                font: {{
+                                    size: 12
+                                }}
+                            }}
+                        }},
+                        tooltip: {{
+                            callbacks: {{
+                                label: function(context) {{
+                                    let label = context.label || '';
+                                    if (label) {{
+                                        label += ': ';
+                                    }}
+                                    label += context.parsed;
+                                    let total = context.dataset.data.reduce((a, b) => a + b, 0);
+                                    let percentage = ((context.parsed / total) * 100).toFixed(1);
+                                    label += ' (' + percentage + '%)';
+                                    return label;
+                                }}
+                            }}
+                        }}
+                    }}
+                }}
+            }});
+        }}
+"""
+
+    html_content += """
+    </script>
+</body>
+</html>
+"""
+
+    return html_content, png_files
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Generate HTML visualization for pynfs results"
+    )
+    parser.add_argument("results_dir", help="Path to results directory")
+    parser.add_argument("kernel_version", help="Kernel version string")
+    parser.add_argument("--output", "-o", help="Output HTML file path")
+
+    args = parser.parse_args()
+
+    # Generate the HTML report and PNG charts
+    html_content, png_files = generate_html_report(
+        args.results_dir, args.kernel_version
+    )
+
+    if not html_content:
+        sys.exit(1)
+
+    # Determine output path
+    if args.output:
+        output_path = Path(args.output)
+        output_path.parent.mkdir(parents=True, exist_ok=True)
+    else:
+        output_dir = Path(args.results_dir) / "html"
+        output_dir.mkdir(parents=True, exist_ok=True)
+        output_path = output_dir / "index.html"
+
+    # Write the HTML file
+    with open(output_path, "w") as f:
+        f.write(html_content)
+
+    print(f"✅ HTML report generated: {output_path}")
+
+    if png_files:
+        print(f"✅ Generated {len(png_files)} PNG charts in: {output_path.parent}")
+    elif MATPLOTLIB_AVAILABLE:
+        print("⚠️  No PNG charts generated (no data)")
+    else:
+        print("⚠️  PNG charts not generated (matplotlib not installed)")
+        print("   Install with: pip3 install matplotlib")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/workflows/pynfs/Makefile b/workflows/pynfs/Makefile
index e0da0cf5a043..29b0feaccbf7 100644
--- a/workflows/pynfs/Makefile
+++ b/workflows/pynfs/Makefile
@@ -13,7 +13,7 @@ WORKFLOW_ARGS += $(PYNFS_ARGS)
 
 
 ifndef LAST_KERNEL
-LAST_KERNEL := $(shell cat workflows/pynfs/results/last-kernel.txt 2>/dev/null)
+LAST_KERNEL := $(shell cat workflows/pynfs/results/last-kernel.txt 2>/dev/null || ls -1dt workflows/pynfs/results/*/ 2>/dev/null | grep -v "last-run" | head -1 | xargs -r basename)
 endif
 
 ifeq ($(LAST_KERNEL), $(shell cat workflows/pynfs/results/last-kernel.txt 2>/dev/null))
@@ -76,10 +76,25 @@ pynfs-show-results:
 		| xargs $(XARGS_ARGS) \
 		| sed '$${/^$$/d;}'
 
+pynfs-visualize:
+	$(Q)if [ ! -d "workflows/pynfs/results/$(LAST_KERNEL)" ]; then \
+		echo "Error: No results found for kernel $(LAST_KERNEL)"; \
+		echo "Available kernels:"; \
+		ls -1 workflows/pynfs/results/ | grep -v last; \
+		exit 1; \
+	fi
+	$(Q)echo "Generating HTML visualization for kernel $(LAST_KERNEL)..."
+	$(Q)python3 scripts/workflows/pynfs/visualize_results.py \
+		workflows/pynfs/results/$(LAST_KERNEL) \
+		$(LAST_KERNEL) \
+		--output workflows/pynfs/results/$(LAST_KERNEL)/html/index.html
+	$(Q)echo "✅ Visualization complete: workflows/pynfs/results/$(LAST_KERNEL)/html/index.html"
+
 pynfs-help-menu:
 	@echo "pynfs options:"
 	@echo "pynfs                             - Git clone pynfs, build and install it"
 	@echo "pynfs-{baseline,dev}              - Run the pynfs test on baseline  or dev hosts and collect results"
+	@echo "pynfs-visualize                   - Generate HTML visualization of test results"
 	@echo ""
 
 HELP_TARGETS += pynfs-help-menu
-- 
2.51.0
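
[Editor's note: the LAST_KERNEL fallback added in the Makefile hunk above picks the newest results directory while skipping "last-run". A standalone sketch of that same pipeline, using a throwaway directory rather than a real kdevops checkout:]

```shell
#!/bin/sh
# Mimic the workflows/pynfs/results/ layout in a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/results/6.10.0" "$tmp/results/last-run"
sleep 1    # ensure a newer mtime for the next directory
mkdir -p "$tmp/results/6.11.0"
# Same pipeline as the Makefile fallback: newest dir by mtime,
# excluding "last-run", reduced to its basename
last=$(ls -1dt "$tmp"/results/*/ 2>/dev/null \
        | grep -v "last-run" | head -1 | xargs -r basename)
echo "$last"    # → 6.11.0
rm -rf "$tmp"
```

Note that `ls -t` sorts by modification time, so on filesystems with coarse timestamp granularity two directories created in the same second may order arbitrarily; for real results directories created across separate test runs this is not a concern.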


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v2 0/8] nfs: few fixes and enhancements
  2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
                   ` (7 preceding siblings ...)
  2025-10-03 20:19 ` [PATCH v2 8/8] pynfs: add visualization support for test results Chuck Lever
@ 2025-10-03 22:57 ` Luis Chamberlain
  8 siblings, 0 replies; 10+ messages in thread
From: Luis Chamberlain @ 2025-10-03 22:57 UTC (permalink / raw)
  To: Chuck Lever; +Cc: kdevops, Chuck Lever

On Fri, Oct 03, 2025 at 04:19:48PM -0400, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
> 
> Original cover:
> 
> In preparation for talking about NFS tests at the MSST conference today
> I figured I'd give a run to all NFS tests. I ran out of time but at
> least this plumbed quite a bit of the stuff to get some results out.
> 
> The iSCSI stuff is likely not correct, and can be dropped. So, feel
> free to take in only what makes sense and drop whatever silly thing
> you see.
> 
> Updates:
> 
> I'm reposting because, strangely, I never received these patches in
> my inbox. I pulled this series from lore using "b4 am" so we can
> keep reviewing.
> 
> For "devconfig: exclude nfsd from journal upload client
> configuration", I wonder if instead of "nfsd", the new checks
> should look for the "service" group, which usually includes nfsd,
> the SMB server, the iscsi target, and the Kerberos KDC. Any opinion
> on that?
> 
> I've dropped the iSCSI-specific changes. Except for a couple of
> nits, the remaining patches look great to me.
> 
> The original idea for pNFS block testing was that a separate iSCSI
> target was to be set up: either it is outside the kdevops test
> network, or it is enabled by setting CONFIG_KDEVOPS_ENABLE_ISCSI.
> Then change the kdevops nfsd server to use that iSCSI target by
> changing the "Persistent storage for exported file systems" setting
> to "iSCSI".
> 
> It's easier overall if the iSCSI target host is separate from the
> kdevops nfsd host. iSCSI loopback is not as performant and is less
> reliable, I've found -- and maybe not supported on every Linux
> distribution we want to run kdevops on, IIRC.
> 
> The gitr, nfstest, and pynfs workflows should all be able to use
> pNFS block, and indeed the latter two have additional tests
> especially for pNFS block layout. So fstests-specific pNFS block
> patches don't seem right to me.
> 
> Luis, if you believe I missed something and need to revisit one or
> more of the patches I've left out, please don't hesitate to bring
> it up.

Looks great, sorry the machines think I'm a spammer somehow! Please push!

  Luis

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2025-10-03 22:57 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-03 20:19 [PATCH v2 0/8] nfs: few fixes and enhancements Chuck Lever
2025-10-03 20:19 ` [PATCH v2 1/8] defconfigs: add NFS testing configurations Chuck Lever
2025-10-03 20:19 ` [PATCH v2 2/8] devconfig: exclude nfsd from journal upload client configuration Chuck Lever
2025-10-03 20:19 ` [PATCH v2 3/8] iscsi: add missing initiator packages for Debian Chuck Lever
2025-10-03 20:19 ` [PATCH v2 4/8] nfsd_add_export: fix become method for filesystem formatting Chuck Lever
2025-10-03 20:19 ` [PATCH v2 5/8] workflows: fstests: fix incorrect pNFS export configuration Chuck Lever
2025-10-03 20:19 ` [PATCH v2 6/8] nfstest: add results visualization support Chuck Lever
2025-10-03 20:19 ` [PATCH v2 7/8] fstests: add soak duration to nfs template Chuck Lever
2025-10-03 20:19 ` [PATCH v2 8/8] pynfs: add visualization support for test results Chuck Lever
2025-10-03 22:57 ` [PATCH v2 0/8] nfs: few fixes and enhancements Luis Chamberlain

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox