public inbox for kdevops@lists.linux.dev
From: Luis Chamberlain <mcgrof@kernel.org>
To: Chuck Lever <cel@kernel.org>, Daniel Gomez <da.gomez@kruces.com>,
	kdevops@lists.linux.dev
Cc: Luis Chamberlain <mcgrof@kernel.org>
Subject: [PATCH 11/13] nfstest: add results visualization support
Date: Mon, 22 Sep 2025 02:36:53 -0700
Message-ID: <20250922093656.2361016-12-mcgrof@kernel.org>
In-Reply-To: <20250922093656.2361016-1-mcgrof@kernel.org>

Add a nfstests-results-visualize make target to generate an HTML
visualization of NFS test results. It processes test logs from
workflows/nfstest/results/last-run and produces a self-contained HTML
report with charts and statistics.

The visualization includes:
- Overall test summary with pass/fail statistics
- Interactive pie charts for test results
- Detailed results grouped by NFS protocol version
- Collapsible sections for easy navigation
- Test configuration details

Usage: make nfstests-results-visualize
Output: workflows/nfstest/results/html/

This makes it easy to analyze test results and to share them: since the
report is self-contained, the html directory can be copied as-is via scp.
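The report stays self-contained because chart PNGs are inlined into the
HTML as base64 data URIs rather than written out as sibling files. A
minimal sketch of the technique (embed_png is an illustrative name here,
not a helper from the scripts):

```python
import base64


def embed_png(png_bytes):
    """Return an HTML <img>-ready data URI for raw PNG bytes."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return f"data:image/png;base64,{encoded}"


# The fixed 8-byte PNG signature is enough to demonstrate the encoding.
print(embed_png(b"\x89PNG\r\n\x1a\n"))  # data:image/png;base64,iVBORw0KGgo=
```

Browsers render such URIs directly in an img src attribute, so the
resulting index.html needs no image files next to it.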

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 workflows/Makefile                            |   4 +
 workflows/nfstest/Makefile                    |   1 +
 .../nfstest/scripts/generate_nfstest_html.py  | 783 ++++++++++++++++++
 .../nfstest/scripts/parse_nfstest_results.py  | 277 +++++++
 .../scripts/visualize_nfstest_results.sh      |  61 ++
 5 files changed, 1126 insertions(+)
 create mode 100755 workflows/nfstest/scripts/generate_nfstest_html.py
 create mode 100755 workflows/nfstest/scripts/parse_nfstest_results.py
 create mode 100755 workflows/nfstest/scripts/visualize_nfstest_results.sh

diff --git a/workflows/Makefile b/workflows/Makefile
index 05c75a2d..58b56688 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -50,6 +50,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST))
 include workflows/nfstest/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST == y
 
+# Always available nfstest visualization target
+nfstests-results-visualize:
+	$(Q)bash $(CURDIR)/workflows/nfstest/scripts/visualize_nfstest_results.sh
+
 ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH))
 include workflows/sysbench/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH == y
diff --git a/workflows/nfstest/Makefile b/workflows/nfstest/Makefile
index fca7a51a..4bd8e147 100644
--- a/workflows/nfstest/Makefile
+++ b/workflows/nfstest/Makefile
@@ -99,6 +99,7 @@ nfstest-help-menu:
 	@echo "nfstest options:"
 	@echo "nfstest                              - Git clone nfstest and install it"
 	@echo "nfstest-{baseline,dev}               - Run selected nfstests on baseline or dev hosts and collect results"
+	@echo "nfstests-results-visualize           - Generate HTML visualization of test results"
 	@echo ""
 
 HELP_TARGETS += nfstest-help-menu
diff --git a/workflows/nfstest/scripts/generate_nfstest_html.py b/workflows/nfstest/scripts/generate_nfstest_html.py
new file mode 100755
index 00000000..277992ae
--- /dev/null
+++ b/workflows/nfstest/scripts/generate_nfstest_html.py
@@ -0,0 +1,783 @@
+#!/usr/bin/env python3
+"""
+Generate HTML visualization for NFS test results
+"""
+
+import json
+import os
+import sys
+import glob
+import base64
+from datetime import datetime
+from pathlib import Path
+from collections import defaultdict
+
+# Try to import matplotlib, but make it optional
+try:
+    import matplotlib
+
+    matplotlib.use("Agg")
+    import matplotlib.pyplot as plt
+    import matplotlib.patches as mpatches
+
+    HAS_MATPLOTLIB = True
+except ImportError:
+    HAS_MATPLOTLIB = False
+    print(
+        "Warning: matplotlib not found. Graphs will not be generated.", file=sys.stderr
+    )
+
+HTML_TEMPLATE = """
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>NFS Test Results - {timestamp}</title>
+    <style>
+        :root {{
+            --primary-color: #2c3e50;
+            --secondary-color: #3498db;
+            --success-color: #27ae60;
+            --danger-color: #e74c3c;
+            --warning-color: #f39c12;
+            --light-bg: #ecf0f1;
+            --card-shadow: 0 2px 4px rgba(0,0,0,0.1);
+        }}
+
+        * {{
+            margin: 0;
+            padding: 0;
+            box-sizing: border-box;
+        }}
+
+        body {{
+            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
+            line-height: 1.6;
+            color: #333;
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            min-height: 100vh;
+            padding: 20px;
+        }}
+
+        .container {{
+            max-width: 1400px;
+            margin: 0 auto;
+            background: white;
+            border-radius: 12px;
+            overflow: hidden;
+            box-shadow: 0 20px 60px rgba(0,0,0,0.3);
+        }}
+
+        .header {{
+            background: var(--primary-color);
+            color: white;
+            padding: 40px;
+            text-align: center;
+            position: relative;
+            overflow: hidden;
+        }}
+
+        .header::before {{
+            content: '';
+            position: absolute;
+            top: 0;
+            left: 0;
+            right: 0;
+            bottom: 0;
+            background: linear-gradient(135deg, rgba(52, 152, 219, 0.1), rgba(46, 204, 113, 0.1));
+        }}
+
+        h1 {{
+            margin: 0;
+            font-size: 2.5em;
+            position: relative;
+            z-index: 1;
+        }}
+
+        .subtitle {{
+            margin-top: 10px;
+            opacity: 0.9;
+            font-size: 1.1em;
+            position: relative;
+            z-index: 1;
+        }}
+
+        .content {{
+            padding: 40px;
+        }}
+
+        .summary-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin-bottom: 40px;
+        }}
+
+        .summary-card {{
+            background: white;
+            border: 1px solid #e0e0e0;
+            padding: 25px;
+            border-radius: 10px;
+            text-align: center;
+            transition: all 0.3s ease;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .summary-card:hover {{
+            transform: translateY(-5px);
+            box-shadow: 0 5px 20px rgba(0,0,0,0.15);
+        }}
+
+        .summary-card.success {{
+            background: linear-gradient(135deg, #667eea20 0%, #27ae6020 100%);
+            border-color: var(--success-color);
+        }}
+
+        .summary-card.danger {{
+            background: linear-gradient(135deg, #e74c3c20 0%, #c0392b20 100%);
+            border-color: var(--danger-color);
+        }}
+
+        .summary-card .value {{
+            font-size: 2.5em;
+            font-weight: bold;
+            margin: 10px 0;
+        }}
+
+        .summary-card.success .value {{
+            color: var(--success-color);
+        }}
+
+        .summary-card.danger .value {{
+            color: var(--danger-color);
+        }}
+
+        .summary-card .label {{
+            color: #7f8c8d;
+            font-size: 0.95em;
+            text-transform: uppercase;
+            letter-spacing: 1px;
+        }}
+
+        .test-suite {{
+            background: white;
+            border: 1px solid #e0e0e0;
+            border-radius: 10px;
+            margin-bottom: 30px;
+            overflow: hidden;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .suite-header {{
+            background: linear-gradient(135deg, var(--secondary-color), #5dade2);
+            color: white;
+            padding: 20px 30px;
+            cursor: pointer;
+            position: relative;
+            transition: all 0.3s ease;
+        }}
+
+        .suite-header:hover {{
+            background: linear-gradient(135deg, #2980b9, var(--secondary-color));
+        }}
+
+        .suite-header h2 {{
+            margin: 0 0 10px 0;
+            font-size: 1.5em;
+            display: flex;
+            align-items: center;
+            justify-content: space-between;
+        }}
+
+        .suite-stats {{
+            display: flex;
+            gap: 20px;
+            font-size: 0.9em;
+            opacity: 0.95;
+        }}
+
+        .suite-content {{
+            padding: 25px;
+            background: #fafafa;
+            max-height: 0;
+            overflow: hidden;
+            transition: max-height 0.5s ease;
+        }}
+
+        .suite-content.expanded {{
+            max-height: 2000px;
+        }}
+
+        .test-table {{
+            width: 100%;
+            border-collapse: collapse;
+            background: white;
+            border-radius: 8px;
+            overflow: hidden;
+        }}
+
+        .test-table th {{
+            background: var(--primary-color);
+            color: white;
+            padding: 12px;
+            text-align: left;
+            font-weight: 600;
+        }}
+
+        .test-table td {{
+            padding: 12px;
+            border-bottom: 1px solid #e0e0e0;
+        }}
+
+        .test-table tr:last-child td {{
+            border-bottom: none;
+        }}
+
+        .test-table tr:hover {{
+            background: #f5f5f5;
+        }}
+
+        .status {{
+            display: inline-block;
+            padding: 4px 12px;
+            border-radius: 20px;
+            font-size: 0.85em;
+            font-weight: 600;
+            text-transform: uppercase;
+        }}
+
+        .status.passed {{
+            background: var(--success-color);
+            color: white;
+        }}
+
+        .status.failed {{
+            background: var(--danger-color);
+            color: white;
+        }}
+
+        .status.skipped {{
+            background: var(--warning-color);
+            color: white;
+        }}
+
+        .progress-bar {{
+            width: 100%;
+            height: 30px;
+            background: #e0e0e0;
+            border-radius: 15px;
+            overflow: hidden;
+            margin: 20px 0;
+            box-shadow: inset 0 2px 4px rgba(0,0,0,0.1);
+        }}
+
+        .progress-fill {{
+            height: 100%;
+            display: flex;
+            transition: width 0.5s ease;
+        }}
+
+        .progress-passed {{
+            background: linear-gradient(135deg, var(--success-color), #2ecc71);
+        }}
+
+        .progress-failed {{
+            background: linear-gradient(135deg, var(--danger-color), #c0392b);
+        }}
+
+        .progress-skipped {{
+            background: linear-gradient(135deg, var(--warning-color), #e67e22);
+        }}
+
+        .graph-container {{
+            margin: 30px 0;
+            text-align: center;
+        }}
+
+        .graph-container img {{
+            max-width: 100%;
+            height: auto;
+            border-radius: 8px;
+            box-shadow: var(--card-shadow);
+        }}
+
+        .config-section {{
+            background: #f8f9fa;
+            border-left: 4px solid var(--secondary-color);
+            padding: 20px;
+            margin: 30px 0;
+            border-radius: 4px;
+        }}
+
+        .config-section h3 {{
+            color: var(--primary-color);
+            margin-bottom: 15px;
+        }}
+
+        .config-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
+            gap: 10px;
+        }}
+
+        .config-item {{
+            display: flex;
+            padding: 8px;
+            background: white;
+            border-radius: 4px;
+        }}
+
+        .config-key {{
+            font-weight: 600;
+            color: var(--primary-color);
+            margin-right: 10px;
+        }}
+
+        .config-value {{
+            color: #555;
+        }}
+
+        .footer {{
+            text-align: center;
+            padding: 20px;
+            background: var(--light-bg);
+            color: #7f8c8d;
+            border-top: 1px solid #e0e0e0;
+        }}
+
+        .toggle-icon {{
+            transition: transform 0.3s ease;
+            display: inline-block;
+        }}
+
+        .suite-header.expanded .toggle-icon {{
+            transform: rotate(90deg);
+        }}
+
+        @media (max-width: 768px) {{
+            .summary-grid {{
+                grid-template-columns: 1fr;
+            }}
+
+            .config-grid {{
+                grid-template-columns: 1fr;
+            }}
+
+            h1 {{
+                font-size: 1.8em;
+            }}
+        }}
+    </style>
+</head>
+<body>
+    <div class="container">
+        <div class="header">
+            <h1>🧪 NFS Test Results</h1>
+            <div class="subtitle">Generated on {timestamp}</div>
+        </div>
+
+        <div class="content">
+            <!-- Summary Cards -->
+            <div class="summary-grid">
+                <div class="summary-card">
+                    <div class="label">Total Tests</div>
+                    <div class="value">{total_tests}</div>
+                </div>
+                <div class="summary-card success">
+                    <div class="label">Passed</div>
+                    <div class="value">{passed_tests}</div>
+                </div>
+                <div class="summary-card danger">
+                    <div class="label">Failed</div>
+                    <div class="value">{failed_tests}</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Pass Rate</div>
+                    <div class="value">{pass_rate:.1f}%</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Total Time</div>
+                    <div class="value">{total_time}</div>
+                </div>
+                <div class="summary-card">
+                    <div class="label">Test Suites</div>
+                    <div class="value">{num_suites}</div>
+                </div>
+            </div>
+
+            <!-- Overall Progress Bar -->
+            <div class="progress-bar">
+                <div class="progress-fill" style="width: 100%;">
+                    <div class="progress-passed" style="width: {pass_percentage:.1f}%;"></div>
+                    <div class="progress-failed" style="width: {fail_percentage:.1f}%;"></div>
+                </div>
+            </div>
+
+            <!-- Graphs -->
+            {graphs_html}
+
+            <!-- Test Suites -->
+            <h2 style="margin: 40px 0 20px 0; color: var(--primary-color);">Test Suite Details</h2>
+            {test_suites_html}
+
+            <!-- Configuration -->
+            {config_html}
+        </div>
+
+        <div class="footer">
+            <p>Generated by kdevops NFS Test Visualization</p>
+            <p>Report generated at {timestamp}</p>
+        </div>
+    </div>
+
+    <script>
+        // Toggle test suite expansion
+        document.querySelectorAll('.suite-header').forEach(header => {{
+            header.addEventListener('click', () => {{
+                header.classList.toggle('expanded');
+                const content = header.nextElementSibling;
+                content.classList.toggle('expanded');
+            }});
+        }});
+
+        // Auto-expand suites with failures
+        document.addEventListener('DOMContentLoaded', () => {{
+            document.querySelectorAll('.suite-header[data-has-failures="true"]').forEach(header => {{
+                header.click();
+            }});
+        }});
+    </script>
+</body>
+</html>
+"""
+
+
+def format_time(seconds):
+    """Format seconds into human-readable time"""
+    if seconds < 60:
+        return f"{seconds:.1f}s"
+    elif seconds < 3600:
+        minutes = seconds / 60
+        return f"{minutes:.1f}m"
+    else:
+        hours = seconds / 3600
+        return f"{hours:.1f}h"
+
+
+def generate_suite_chart(suite_name, suite_data, output_dir):
+    """Generate a pie chart for test suite results"""
+    if not HAS_MATPLOTLIB:
+        return None
+
+    try:
+        # Count results
+        passed = sum(r["summary"]["passed"] for r in suite_data)
+        failed = sum(r["summary"]["failed"] for r in suite_data)
+
+        if passed + failed == 0:
+            return None
+
+        # Create pie chart
+        fig, ax = plt.subplots(figsize=(6, 6))
+        labels = []
+        sizes = []
+        colors = []
+
+        if passed > 0:
+            labels.append(f"Passed ({passed})")
+            sizes.append(passed)
+            colors.append("#27ae60")
+
+        if failed > 0:
+            labels.append(f"Failed ({failed})")
+            sizes.append(failed)
+            colors.append("#e74c3c")
+
+        ax.pie(
+            sizes,
+            labels=labels,
+            colors=colors,
+            autopct="%1.1f%%",
+            startangle=90,
+            textprops={"fontsize": 12},
+        )
+        ax.set_title(
+            f"{suite_name.upper()} Test Results", fontsize=14, fontweight="bold"
+        )
+
+        # Save to file
+        chart_path = os.path.join(output_dir, f"{suite_name}_pie_chart.png")
+        plt.savefig(chart_path, dpi=100, bbox_inches="tight", transparent=True)
+        plt.close()
+
+        return chart_path
+    except Exception as e:
+        print(
+            f"Warning: Could not generate chart for {suite_name}: {e}", file=sys.stderr
+        )
+        return None
+
+
+def generate_overall_chart(summary, output_dir):
+    """Generate overall test results chart"""
+    if not HAS_MATPLOTLIB:
+        return None
+
+    try:
+        # Create figure with two subplots
+        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))
+
+        # Pie chart for pass/fail
+        passed = summary["total_passed"]
+        failed = summary["total_failed"]
+
+        if passed + failed > 0:
+            sizes = [passed, failed]
+            labels = [f"Passed ({passed})", f"Failed ({failed})"]
+            colors = ["#27ae60", "#e74c3c"]
+
+            ax1.pie(
+                sizes,
+                labels=labels,
+                colors=colors,
+                autopct="%1.1f%%",
+                startangle=90,
+                textprops={"fontsize": 12},
+            )
+            ax1.set_title("Overall Test Results", fontsize=14, fontweight="bold")
+
+        # Bar chart for test suites
+        if summary["test_suites_run"]:
+            suites = summary["test_suites_run"]
+            suite_counts = [len(summary.get(s, [])) for s in suites]
+
+            bars = ax2.bar(range(len(suites)), suite_counts, color="#3498db")
+            ax2.set_xlabel("Test Suite", fontsize=12)
+            ax2.set_ylabel("Number of Tests", fontsize=12)
+            ax2.set_title("Tests per Suite", fontsize=14, fontweight="bold")
+            ax2.set_xticks(range(len(suites)))
+            ax2.set_xticklabels(suites, rotation=45, ha="right")
+
+            # Add value labels on bars
+            for bar in bars:
+                height = bar.get_height()
+                ax2.text(
+                    bar.get_x() + bar.get_width() / 2.0,
+                    height,
+                    f"{int(height)}",
+                    ha="center",
+                    va="bottom",
+                )
+
+        plt.tight_layout()
+
+        # Save to file
+        chart_path = os.path.join(output_dir, "overall_results.png")
+        plt.savefig(chart_path, dpi=100, bbox_inches="tight", transparent=True)
+        plt.close()
+
+        return chart_path
+    except Exception as e:
+        print(f"Warning: Could not generate overall chart: {e}", file=sys.stderr)
+        return None
+
+
+def embed_image(image_path):
+    """Embed image as base64 data URI"""
+    if not os.path.exists(image_path):
+        return None
+
+    try:
+        with open(image_path, "rb") as f:
+            data = base64.b64encode(f.read()).decode()
+        return f"data:image/png;base64,{data}"
+    except Exception:
+        return None
+
+
+def generate_html(results, output_dir):
+    """Generate HTML report from parsed results"""
+    summary = results["overall_summary"]
+
+    # Calculate statistics
+    total_tests = summary["total_tests"]
+    passed_tests = summary["total_passed"]
+    failed_tests = summary["total_failed"]
+    pass_rate = (passed_tests / total_tests * 100) if total_tests > 0 else 0
+    pass_percentage = pass_rate
+    fail_percentage = 100 - pass_percentage
+    total_time = format_time(summary["total_time"])
+    num_suites = len(summary["test_suites_run"])
+
+    # Generate graphs
+    graphs_html = ""
+    overall_chart = generate_overall_chart(summary, output_dir)
+    if overall_chart:
+        img_data = embed_image(overall_chart)
+        if img_data:
+            graphs_html += f"""
+            <div class="graph-container">
+                <h2 style="color: var(--primary-color); margin-bottom: 20px;">Test Results Overview</h2>
+                <img src="{img_data}" alt="Overall Results">
+            </div>
+            """
+
+    # Generate test suites HTML
+    test_suites_html = ""
+    for suite_name, suite_data in results["test_suites"].items():
+        if not suite_data:
+            continue
+
+        # Calculate suite statistics
+        suite_total = sum(r["summary"]["total"] for r in suite_data)
+        suite_passed = sum(r["summary"]["passed"] for r in suite_data)
+        suite_failed = sum(r["summary"]["failed"] for r in suite_data)
+        suite_time = sum(r["summary"]["total_time"] for r in suite_data)
+        has_failures = suite_failed > 0
+
+        # Generate suite chart
+        suite_chart = generate_suite_chart(suite_name, suite_data, output_dir)
+
+        # Build test details table
+        test_rows = ""
+        for result in suite_data:
+            for test in result["tests"]:
+                status_class = test["status"].lower()
+                test_rows += f"""
+                <tr>
+                    <td>{test['name']}</td>
+                    <td>{test['description'][:100]}{'...' if len(test['description']) > 100 else ''}</td>
+                    <td><span class="status {status_class}">{test['status']}</span></td>
+                    <td>{test['duration']:.3f}s</td>
+                </tr>
+                """
+
+        # Build suite HTML
+        test_suites_html += f"""
+        <div class="test-suite">
+            <div class="suite-header" data-has-failures="{str(has_failures).lower()}">
+                <h2>
+                    <span><span class="toggle-icon">▶</span> {suite_name.upper()}</span>
+                    <span style="font-size: 0.7em; font-weight: normal;">
+                        {suite_passed}/{suite_total} passed
+                    </span>
+                </h2>
+                <div class="suite-stats">
+                    <span>✓ Passed: {suite_passed}</span>
+                    <span>✗ Failed: {suite_failed}</span>
+                    <span>⏱ Time: {format_time(suite_time)}</span>
+                </div>
+            </div>
+            <div class="suite-content">
+                {f'<div class="graph-container"><img src="{embed_image(suite_chart)}" alt="{suite_name} Results"></div>' if suite_chart and embed_image(suite_chart) else ''}
+                <table class="test-table">
+                    <thead>
+                        <tr>
+                            <th>Test Name</th>
+                            <th>Description</th>
+                            <th>Status</th>
+                            <th>Duration</th>
+                        </tr>
+                    </thead>
+                    <tbody>
+                        {test_rows}
+                    </tbody>
+                </table>
+            </div>
+        </div>
+        """
+
+    # Generate configuration HTML
+    config_html = ""
+    if results["test_suites"]:
+        # Get configuration from first test suite
+        for suite_data in results["test_suites"].values():
+            if suite_data and suite_data[0]["configuration"]:
+                config = suite_data[0]["configuration"]
+                config_items = ""
+                for key, value in sorted(config.items()):
+                    if key and value and value != "None":
+                        config_items += f"""
+                        <div class="config-item">
+                            <span class="config-key">{key.replace('_', ' ').title()}:</span>
+                            <span class="config-value">{value}</span>
+                        </div>
+                        """
+
+                if config_items:
+                    config_html = f"""
+                    <div class="config-section">
+                        <h3>Test Configuration</h3>
+                        <div class="config-grid">
+                            {config_items}
+                        </div>
+                    </div>
+                    """
+                break
+
+    # Generate final HTML
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+    html_content = HTML_TEMPLATE.format(
+        timestamp=timestamp,
+        total_tests=total_tests,
+        passed_tests=passed_tests,
+        failed_tests=failed_tests,
+        pass_rate=pass_rate,
+        pass_percentage=pass_percentage,
+        fail_percentage=fail_percentage,
+        total_time=total_time,
+        num_suites=num_suites,
+        graphs_html=graphs_html,
+        test_suites_html=test_suites_html,
+        config_html=config_html,
+    )
+
+    # Write HTML file
+    html_path = os.path.join(output_dir, "index.html")
+    with open(html_path, "w") as f:
+        f.write(html_content)
+
+    return html_path
+
+
+def main():
+    """Main entry point"""
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        results_dir = "workflows/nfstest/results/last-run"
+
+    if not os.path.exists(results_dir):
+        print(
+            f"Error: Results directory '{results_dir}' does not exist", file=sys.stderr
+        )
+        sys.exit(1)
+
+    # Check for parsed results
+    parsed_file = os.path.join(results_dir, "parsed_results.json")
+    if not os.path.exists(parsed_file):
+        print(
+            "Error: Parsed results file not found. Run parse_nfstest_results.py first.",
+            file=sys.stderr,
+        )
+        sys.exit(1)
+
+    # Load parsed results
+    with open(parsed_file, "r") as f:
+        results = json.load(f)
+
+    # Create HTML output directory - use absolute path from results_dir
+    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(results_dir)))
+    html_dir = os.path.join(base_dir, "html")
+    os.makedirs(html_dir, exist_ok=True)
+
+    # Generate HTML report
+    html_path = generate_html(results, html_dir)
+
+    print(f"HTML report generated: {html_path}")
+    print(f"Directory ready for transfer: {html_dir}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/workflows/nfstest/scripts/parse_nfstest_results.py b/workflows/nfstest/scripts/parse_nfstest_results.py
new file mode 100755
index 00000000..40d638fa
--- /dev/null
+++ b/workflows/nfstest/scripts/parse_nfstest_results.py
@@ -0,0 +1,277 @@
+#!/usr/bin/env python3
+"""
+Parse NFS test results from log files and extract key metrics.
+"""
+
+import os
+import re
+import sys
+import json
+import glob
+from datetime import datetime
+from pathlib import Path
+from collections import defaultdict
+
+
+def parse_timestamp(timestamp_str):
+    """Parse timestamp from log format"""
+    try:
+        # Handle format: 17:18:41.048703
+        time_parts = timestamp_str.split(":")
+        if len(time_parts) == 3:
+            hours = int(time_parts[0])
+            minutes = int(time_parts[1])
+            seconds = float(time_parts[2])
+            return hours * 3600 + minutes * 60 + seconds
+    except ValueError:
+        pass
+    return 0
+
+
+def parse_test_log(log_path):
+    """Parse a single NFS test log file"""
+    results = {
+        "file": os.path.basename(log_path),
+        "test_suite": "",
+        "tests": [],
+        "summary": {
+            "total": 0,
+            "passed": 0,
+            "failed": 0,
+            "skipped": 0,
+            "total_time": 0,
+        },
+        "configuration": {},
+        "test_groups": defaultdict(list),
+    }
+
+    # Determine test suite from filename
+    if "interop" in log_path:
+        results["test_suite"] = "interop"
+    elif "alloc" in log_path:
+        results["test_suite"] = "alloc"
+    elif "dio" in log_path:
+        results["test_suite"] = "dio"
+    elif "lock" in log_path:
+        results["test_suite"] = "lock"
+    elif "posix" in log_path:
+        results["test_suite"] = "posix"
+    elif "sparse" in log_path:
+        results["test_suite"] = "sparse"
+    elif "ssc" in log_path:
+        results["test_suite"] = "ssc"
+
+    current_test = None
+    start_time = None
+
+    with open(log_path, "r") as f:
+        lines = f.readlines()
+
+    for i, line in enumerate(lines):
+        # Parse configuration options
+        if line.strip().startswith("OPTS:") and "--" in line:
+            opts_match = re.search(r"OPTS:.*?-\s*(.+?)(?:--|\s*$)", line)
+            if opts_match:
+                opt_str = opts_match.group(1).strip()
+                if "=" in opt_str:
+                    key = opt_str.split("=")[0].replace("-", "_")
+                    value = opt_str.split("=", 1)[1]
+                    results["configuration"][key] = value
+
+        # Parse individual OPTS lines for configuration
+        if line.strip().startswith("OPTS:") and "=" in line and "--" not in line:
+            opts_match = re.search(r"OPTS:.*?-\s*(\w+)\s*=\s*(.+)", line)
+            if opts_match:
+                key = opts_match.group(1).replace("-", "_")
+                value = opts_match.group(2).strip()
+                results["configuration"][key] = value
+
+        # Parse test start
+        if line.startswith("*** "):
+            test_desc = line[4:].strip()
+            current_test = {
+                "name": "",
+                "description": test_desc,
+                "status": "unknown",
+                "duration": 0,
+                "errors": [],
+            }
+
+        # Parse test name
+        if "TEST: Running test" in line:
+            test_match = re.search(r"Running test '(\w+)'", line)
+            if test_match and current_test:
+                current_test["name"] = test_match.group(1)
+
+        # Parse test results
+        if line.strip().startswith("PASS:"):
+            if current_test:
+                current_test["status"] = "passed"
+                pass_msg = line.split("PASS:", 1)[1].strip()
+                if "assertions" not in current_test:
+                    current_test["assertions"] = []
+                current_test["assertions"].append(
+                    {"status": "PASS", "message": pass_msg}
+                )
+
+        if line.strip().startswith("FAIL:"):
+            if current_test:
+                current_test["status"] = "failed"
+                fail_msg = line.split("FAIL:", 1)[1].strip()
+                current_test["errors"].append(fail_msg)
+                if "assertions" not in current_test:
+                    current_test["assertions"] = []
+                current_test["assertions"].append(
+                    {"status": "FAIL", "message": fail_msg}
+                )
+
+        # Parse test timing
+        if line.strip().startswith("TIME:"):
+            time_match = re.search(r"TIME:\s*([\d.]+)\s*(ms|m|s)?", line)
+            if time_match and current_test:
+                duration = float(time_match.group(1))
+                unit = time_match.group(2) if time_match.group(2) else "s"
+                if unit == "m":
+                    duration *= 60
+                elif unit == "ms":
+                    duration /= 1000
+                current_test["duration"] = duration
+                results["tests"].append(current_test)
+
+                # Group tests by the NFS version named in the
+                # test description
+                if current_test["name"]:
+                    if "NFSv3" in current_test["description"]:
+                        results["test_groups"]["NFSv3"].append(current_test)
+                    if "NFSv4" in current_test["description"]:
+                        if "NFSv4.1" in current_test["description"]:
+                            results["test_groups"]["NFSv4.1"].append(current_test)
+                        else:
+                            results["test_groups"]["NFSv4.0"].append(current_test)
+
+                current_test = None
+
+        # Parse final summary
+        if "tests (" in line and "passed," in line:
+            summary_match = re.search(
+                r"(\d+)\s+tests\s*\((\d+)\s+passed,\s*(\d+)\s+failed", line
+            )
+            if summary_match:
+                results["summary"]["total"] = int(summary_match.group(1))
+                results["summary"]["passed"] = int(summary_match.group(2))
+                results["summary"]["failed"] = int(summary_match.group(3))
+
+        # Parse total time
+        if line.startswith("Total time:"):
+            time_match = re.search(r"Total time:\s*(.+)", line)
+            if time_match:
+                time_str = time_match.group(1).strip()
+                # Convert format like "2m22.099818s" to seconds
+                total_seconds = 0
+                if "m" in time_str:
+                    parts = time_str.split("m")
+                    total_seconds += int(parts[0]) * 60
+                    if len(parts) > 1:
+                        seconds_part = parts[1].replace("s", "").strip()
+                        if seconds_part:
+                            total_seconds += float(seconds_part)
+                elif "s" in time_str:
+                    total_seconds = float(time_str.replace("s", "").strip())
+                results["summary"]["total_time"] = total_seconds
+
+    return results
+
+
+def parse_all_results(results_dir):
+    """Parse all test results in a directory"""
+    all_results = {
+        "timestamp": datetime.now().isoformat(),
+        "test_suites": {},
+        "overall_summary": {
+            "total_tests": 0,
+            "total_passed": 0,
+            "total_failed": 0,
+            "total_time": 0,
+            "test_suites_run": [],
+        },
+    }
+
+    # Find all log files
+    log_pattern = os.path.join(results_dir, "**/*.log")
+    log_files = glob.glob(log_pattern, recursive=True)
+
+    for log_file in sorted(log_files):
+        # Parse the log file
+        suite_results = parse_test_log(log_file)
+
+        # Determine suite category from the log file path
+        suite_key = None
+        for key in ("interop", "alloc", "dio", "lock", "posix", "sparse", "ssc"):
+            if f"/{key}/" in log_file:
+                suite_key = key
+                break
+        if suite_key is None:
+            suite_key = suite_results["test_suite"] or "unknown"
+
+        # Store results
+        if suite_key not in all_results["test_suites"]:
+            all_results["test_suites"][suite_key] = []
+            all_results["overall_summary"]["test_suites_run"].append(suite_key)
+
+        all_results["test_suites"][suite_key].append(suite_results)
+
+        # Update overall summary
+        all_results["overall_summary"]["total_tests"] += suite_results["summary"][
+            "total"
+        ]
+        all_results["overall_summary"]["total_passed"] += suite_results["summary"][
+            "passed"
+        ]
+        all_results["overall_summary"]["total_failed"] += suite_results["summary"][
+            "failed"
+        ]
+        all_results["overall_summary"]["total_time"] += suite_results["summary"][
+            "total_time"
+        ]
+
+    return all_results
+
+
+def main():
+    """Main entry point"""
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        results_dir = "workflows/nfstest/results/last-run"
+
+    if not os.path.exists(results_dir):
+        print(
+            f"Error: Results directory '{results_dir}' does not exist", file=sys.stderr
+        )
+        sys.exit(1)
+
+    # Parse all results
+    results = parse_all_results(results_dir)
+
+    # Output as JSON
+    print(json.dumps(results, indent=2))
+
+    # Save to file
+    output_file = os.path.join(results_dir, "parsed_results.json")
+    with open(output_file, "w") as f:
+        json.dump(results, f, indent=2)
+
+    print(f"\nResults saved to: {output_file}", file=sys.stderr)
+
+
+if __name__ == "__main__":
+    main()
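For reference, the "Total time" conversion implemented above (turning strings such as "2m22.099818s" into seconds) can be sketched as a standalone helper. The function name below is illustrative only and not part of the patch:

```python
import re

def total_time_to_seconds(time_str):
    """Convert nfstest 'Total time' strings like '2m22.099818s',
    '3m', or '5.5s' into seconds (illustrative helper only)."""
    time_str = time_str.strip()
    match = re.fullmatch(r"(?:(\d+)m)?(?:([\d.]+)s)?", time_str)
    if not match or not time_str:
        return 0.0
    minutes, seconds = match.groups()
    total = 0.0
    if minutes:
        total += int(minutes) * 60
    if seconds:
        total += float(seconds)
    return total

print(round(total_time_to_seconds("2m22.099818s"), 6))  # 142.099818
print(total_time_to_seconds("5.5s"))                    # 5.5
```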
diff --git a/workflows/nfstest/scripts/visualize_nfstest_results.sh b/workflows/nfstest/scripts/visualize_nfstest_results.sh
new file mode 100755
index 00000000..e7ddbfa4
--- /dev/null
+++ b/workflows/nfstest/scripts/visualize_nfstest_results.sh
@@ -0,0 +1,61 @@
+#!/bin/bash
+# Visualize NFS test results
+
+SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
+KDEVOPS_DIR="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
+RESULTS_DIR="${1:-$KDEVOPS_DIR/workflows/nfstest/results/last-run}"
+HTML_OUTPUT_DIR="$KDEVOPS_DIR/workflows/nfstest/results/html"
+
+# Check if results directory exists
+if [ ! -d "$RESULTS_DIR" ]; then
+    echo "Error: Results directory '$RESULTS_DIR' does not exist"
+    echo "Please run 'make nfstest-baseline' or 'make nfstest-dev' first to generate test results"
+    exit 1
+fi
+
+# Check if there are any log files
+LOG_COUNT=$(find "$RESULTS_DIR" -name "*.log" 2>/dev/null | wc -l)
+if [ "$LOG_COUNT" -eq 0 ]; then
+    echo "Error: No test log files found in '$RESULTS_DIR'"
+    echo "Please run NFS tests first to generate results"
+    exit 1
+fi
+
+echo "Processing NFS test results from: $RESULTS_DIR"
+
+# Parse the results
+echo "Step 1: Parsing test results..."
+if ! python3 "$SCRIPT_DIR/parse_nfstest_results.py" "$RESULTS_DIR"; then
+    echo "Error: Failed to parse test results"
+    exit 1
+fi
+
+# Generate HTML visualization
+echo "Step 2: Generating HTML visualization..."
+if ! python3 "$SCRIPT_DIR/generate_nfstest_html.py" "$RESULTS_DIR"; then
+    echo "Warning: HTML generation exited with a non-zero status; checking for output anyway"
+fi
+
+# Check if HTML was generated
+if [ -f "$HTML_OUTPUT_DIR/index.html" ]; then
+    echo ""
+    echo "✓ Visualization complete!"
+    echo ""
+    echo "Results available in: $HTML_OUTPUT_DIR/"
+    echo ""
+    echo "To view locally:"
+    echo "  open $HTML_OUTPUT_DIR/index.html"
+    echo ""
+    echo "To copy to remote system:"
+    echo "  scp -r $HTML_OUTPUT_DIR/ user@remote:/path/to/destination/"
+    echo ""
+
+    # List generated files
+    echo "Generated files:"
+    ls -lh "$HTML_OUTPUT_DIR/"
+else
+    echo "Error: HTML generation failed - no index.html created"
+    exit 1
+fi
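Once parsed_results.json exists, downstream tooling can consume it directly. A minimal sketch, assuming only the overall_summary field names built by parse_nfstest_results.py above; the sample data is made up:

```python
def summarize(results):
    """One-line summary from the overall_summary structure that
    parse_nfstest_results.py writes to parsed_results.json."""
    s = results["overall_summary"]
    total = s["total_tests"]
    rate = 100.0 * s["total_passed"] / total if total else 0.0
    return (f"{total} tests, {s['total_passed']} passed, "
            f"{s['total_failed']} failed ({rate:.1f}% pass rate) "
            f"in {s['total_time']:.1f}s across: "
            + ", ".join(s["test_suites_run"]))

# Hypothetical sample in the same shape as parsed_results.json
sample = {
    "overall_summary": {
        "total_tests": 40,
        "total_passed": 38,
        "total_failed": 2,
        "total_time": 142.1,
        "test_suites_run": ["interop", "posix"],
    }
}
print(summarize(sample))
# 40 tests, 38 passed, 2 failed (95.0% pass rate) in 142.1s across: interop, posix
```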
-- 
2.51.0


Thread overview: 15+ messages
2025-09-22  9:36 [PATCH 00/13] nfs: few fixes and enhancements Luis Chamberlain
2025-09-22  9:36 ` [PATCH 01/13] defconfigs: add NFS testing configurations Luis Chamberlain
2025-09-22  9:36 ` [PATCH 02/13] devconfig: exclude nfsd from journal upload client configuration Luis Chamberlain
2025-09-22  9:36 ` [PATCH 03/13] iscsi: add missing initiator packages for Debian Luis Chamberlain
2025-09-22  9:36 ` [PATCH 04/13] fstests: fix pNFS block layout iSCSI setup Luis Chamberlain
2025-09-22  9:36 ` [PATCH 05/13] nfsd/fstests: fix pNFS block layout iSCSI configuration Luis Chamberlain
2025-09-22  9:36 ` [PATCH 06/13] fstests: set up iSCSI target on NFS server before test nodes Luis Chamberlain
2025-09-22  9:36 ` [PATCH 07/13] fstests: move conditional to play level for iSCSI setup Luis Chamberlain
2025-09-22  9:36 ` [PATCH 08/13] fstests: temporarily disable iSCSI setup for pNFS Luis Chamberlain
2025-09-22  9:36 ` [PATCH 09/13] nfsd_add_export: fix become method for filesystem formatting Luis Chamberlain
2025-09-22  9:36 ` [PATCH 10/13] workflows: fstests: fix incorrect pNFS export configuration Luis Chamberlain
2025-09-22  9:36 ` Luis Chamberlain [this message]
2025-09-22  9:36 ` [PATCH 12/13] fstests: add soak duration to nfs template Luis Chamberlain
2025-09-22  9:36 ` [PATCH 13/13] pynfs: add visualization support for test results Luis Chamberlain
2025-09-22  9:46 ` [PATCH 00/13] nfs: few fixes and enhancements Luis Chamberlain
