public inbox for kdevops@lists.linux.dev
From: Luis Chamberlain <mcgrof@kernel.org>
To: Chuck Lever <cel@kernel.org>, Daniel Gomez <da.gomez@kruces.com>,
	kdevops@lists.linux.dev
Cc: hui81.qi@samsung.com, kundan.kumar@samsung.com,
	Luis Chamberlain <mcgrof@kernel.org>
Subject: [PATCH v2 4/4] minio: add MinIO Warp S3 benchmarking with declared hosts support
Date: Sat, 30 Aug 2025 21:12:00 -0700
Message-ID: <20250831041202.2172115-5-mcgrof@kernel.org>
In-Reply-To: <20250831041202.2172115-1-mcgrof@kernel.org>

Add MinIO Warp S3 benchmarking workflow with support for declared hosts
(pre-existing infrastructure) and fix critical issues in template generation
and benchmark execution.

MinIO Warp Workflow:
- Add MinIO server deployment via Docker containers
- Implement Warp S3 benchmark suite with configurable parameters
- Support both single and comprehensive benchmark modes
- Add storage configuration for XFS/Btrfs/ext4 filesystems
- Include benchmark result analysis and visualization tools
- Fix benchmark duration handling (runs terminated after ~45s because
  the --autoterm flag was combined with the --objects parameter)
- Fix async timeout calculation for long-running benchmarks
- Add proper help targets to Makefile
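
The duration fix amounts to relying on --duration alone rather than
combining --autoterm with --objects. A hedged sketch of the resulting
warp invocation (host, credentials, and flag values are illustrative,
not the exact command the role generates):

```shell
# Build the warp command the way the fixed role does: a fixed --duration,
# no --autoterm, so the benchmark runs for the full configured window.
WARP_DURATION="30m"
WARP_CMD="warp mixed \
  --host 127.0.0.1:9000 \
  --access-key minioadmin --secret-key minioadmin \
  --duration ${WARP_DURATION} \
  --concurrent 10 \
  --obj.size 1MB \
  --benchdata /tmp/warp-results"
# Print the command for inspection before running it against a server.
echo "${WARP_CMD}"
```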

Declared Hosts Support:
- Enable using pre-existing infrastructure (bare metal, cloud VMs)
- Skip bringup/teardown for systems with existing SSH access
- Add DECLARED_HOSTS and DATA_PATH CLI variable overrides
- Disable data partition creation when using declared hosts
- Restrict unreviewed workflows pending compatibility testing
- Auto-infer user/group settings on target systems
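
A sketch of the declared-hosts flow using the DECLARED_HOSTS and
DATA_PATH overrides added here; the host names and data path are
illustrative, and the command is echoed so it can be inspected before
being run for real:

```shell
# Hypothetical declared-hosts bringup-free invocation. DECLARED_HOSTS
# and DATA_PATH are the CLI variable overrides this patch adds; the
# target name matches defconfigs/minio-warp-declared-hosts.
DECLARED_HOSTS="server1.example.com server2.example.com"
DATA_PATH="/mnt/nvme/minio"

echo make defconfig-minio-warp-declared-hosts \
     "DECLARED_HOSTS=${DECLARED_HOSTS}" \
     "DATA_PATH=${DATA_PATH}"
```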

Template System Improvements:
- Simplify gen_hosts template selection using kdevops_workflow_name
- Reduce 40+ lines of conditionals to a single dynamic inclusion
- Fix fstests host generation regression (was generating 251 hosts)
- Add workflow-specific template structure under workflows/*/
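
The dynamic inclusion described above can be sketched as a single Jinja2
include keyed on the workflow name; the exact expression in gen_hosts
may differ, this only illustrates the shape of the simplification:

```jinja
{# Hedged sketch: replace per-workflow conditionals with one include
   resolved from kdevops_workflow_name at render time. #}
{% include 'workflows/' ~ kdevops_workflow_name ~ '.j2' %}
```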

Build System Fixes:
- Add missing extra_vars.yaml dependencies to prevent build failures
- Fix Kconfig yaml output for minio_warp_run_comprehensive_suite
- Ensure proper task ordering in Ansible playbooks
- Fix Jinja2 syntax errors in benchmark scripts

The workflow now properly runs for configured durations (e.g., 30m)
and supports both dedicated test infrastructure and pre-existing
systems via declared hosts.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .gitignore                                    |   2 +
 defconfigs/minio-warp                         |  52 ++
 defconfigs/minio-warp-ab                      |  41 +
 defconfigs/minio-warp-btrfs                   |  35 +
 defconfigs/minio-warp-declared-hosts          |  56 ++
 defconfigs/minio-warp-multifs                 |  74 ++
 defconfigs/minio-warp-storage                 |  65 ++
 defconfigs/minio-warp-xfs                     |  35 +
 defconfigs/minio-warp-xfs-16k                 |  65 ++
 defconfigs/minio-warp-xfs-lbs                 |  65 ++
 kconfigs/workflows/Kconfig                    |  19 +
 playbooks/minio.yml                           |  53 ++
 playbooks/roles/ai_setup/tasks/main.yml       |  40 +-
 playbooks/roles/gen_hosts/tasks/main.yml      |  19 +-
 .../gen_hosts/templates/workflows/minio.j2    | 173 ++++
 playbooks/roles/gen_nodes/tasks/main.yml      | 128 ++-
 playbooks/roles/minio_destroy/tasks/main.yml  |  34 +
 playbooks/roles/minio_install/tasks/main.yml  |  61 ++
 playbooks/roles/minio_results/tasks/main.yml  |  86 ++
 playbooks/roles/minio_setup/defaults/main.yml |  16 +
 playbooks/roles/minio_setup/tasks/main.yml    | 100 ++
 .../roles/minio_uninstall/tasks/main.yml      |  17 +
 playbooks/roles/minio_warp_run/tasks/main.yml | 249 +++++
 .../templates/warp_config.json.j2             |  14 +
 workflows/Makefile                            |   4 +
 workflows/minio/Kconfig                       |  23 +
 workflows/minio/Kconfig.docker                |  66 ++
 workflows/minio/Kconfig.storage               | 364 ++++++++
 workflows/minio/Kconfig.warp                  | 141 +++
 workflows/minio/Makefile                      |  76 ++
 .../minio/scripts/analyze_warp_results.py     | 858 ++++++++++++++++++
 .../minio/scripts/generate_warp_report.py     | 404 +++++++++
 .../minio/scripts/run_benchmark_suite.sh      | 116 +++
 33 files changed, 3517 insertions(+), 34 deletions(-)
 create mode 100644 defconfigs/minio-warp
 create mode 100644 defconfigs/minio-warp-ab
 create mode 100644 defconfigs/minio-warp-btrfs
 create mode 100644 defconfigs/minio-warp-declared-hosts
 create mode 100644 defconfigs/minio-warp-multifs
 create mode 100644 defconfigs/minio-warp-storage
 create mode 100644 defconfigs/minio-warp-xfs
 create mode 100644 defconfigs/minio-warp-xfs-16k
 create mode 100644 defconfigs/minio-warp-xfs-lbs
 create mode 100644 playbooks/minio.yml
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/minio.j2
 create mode 100644 playbooks/roles/minio_destroy/tasks/main.yml
 create mode 100644 playbooks/roles/minio_install/tasks/main.yml
 create mode 100644 playbooks/roles/minio_results/tasks/main.yml
 create mode 100644 playbooks/roles/minio_setup/defaults/main.yml
 create mode 100644 playbooks/roles/minio_setup/tasks/main.yml
 create mode 100644 playbooks/roles/minio_uninstall/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/templates/warp_config.json.j2
 create mode 100644 workflows/minio/Kconfig
 create mode 100644 workflows/minio/Kconfig.docker
 create mode 100644 workflows/minio/Kconfig.storage
 create mode 100644 workflows/minio/Kconfig.warp
 create mode 100644 workflows/minio/Makefile
 create mode 100755 workflows/minio/scripts/analyze_warp_results.py
 create mode 100755 workflows/minio/scripts/generate_warp_report.py
 create mode 100755 workflows/minio/scripts/run_benchmark_suite.sh

diff --git a/.gitignore b/.gitignore
index b725aba..67ab9d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -89,6 +89,8 @@ playbooks/roles/linux-mirror/linux-mirror-systemd/mirrors.yaml
 #   yet.
 workflows/selftests/results/
 
+workflows/minio/results/
+
 workflows/linux/refs/default/Kconfig.linus
 workflows/linux/refs/default/Kconfig.next
 workflows/linux/refs/default/Kconfig.stable
diff --git a/defconfigs/minio-warp b/defconfigs/minio-warp
new file mode 100644
index 0000000..4d55c2d
--- /dev/null
+++ b/defconfigs/minio-warp
@@ -0,0 +1,52 @@
+#
+# MinIO Warp S3 benchmarking configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_BENCHMARK_MIXED=y
+CONFIG_MINIO_WARP_BENCHMARK_TYPE="mixed"
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=10
\ No newline at end of file
diff --git a/defconfigs/minio-warp-ab b/defconfigs/minio-warp-ab
new file mode 100644
index 0000000..f20142d
--- /dev/null
+++ b/defconfigs/minio-warp-ab
@@ -0,0 +1,41 @@
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# A/B Testing Configuration
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# MinIO Configuration
+CONFIG_MINIO_ENABLE=y
+CONFIG_MINIO_CONTAINER_IMAGE="minio/minio:latest"
+CONFIG_MINIO_CONTAINER_NAME="minio-server"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-network"
+
+# Warp Benchmark Configuration - Comprehensive Suite
+CONFIG_MINIO_WARP_BENCHMARK_MIXED=y
+CONFIG_MINIO_WARP_DURATION="2m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=n
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+# Enable web UI for monitoring
+CONFIG_MINIO_WARP_ENABLE_WEB_UI=y
+CONFIG_MINIO_WARP_WEB_UI_PORT=7762
+
+# Node configuration for A/B testing
+CONFIG_KDEVOPS_HOSTS_TEMPLATE="hosts.j2"
+CONFIG_KDEVOPS_NODES_TEMPLATE="nodes.j2"
+CONFIG_KDEVOPS_PLAYBOOK_DIR="playbooks"
+CONFIG_KDEVOPS_ANSIBLE_INVENTORY_FILE="hosts"
+CONFIG_KDEVOPS_NODES="nodes.yaml"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-btrfs b/defconfigs/minio-warp-btrfs
new file mode 100644
index 0000000..85bd8fa
--- /dev/null
+++ b/defconfigs/minio-warp-btrfs
@@ -0,0 +1,35 @@
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# MinIO Configuration for Btrfs testing
+CONFIG_MINIO_ENABLE=y
+CONFIG_MINIO_CONTAINER_IMAGE="minio/minio:latest"
+CONFIG_MINIO_CONTAINER_NAME="minio-server"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+
+# Configure Btrfs filesystem for MinIO storage
+CONFIG_MINIO_USE_CUSTOM_FILESYSTEM=y
+CONFIG_MINIO_STORAGE_DEVICE="/dev/nvme0n1"
+CONFIG_MINIO_STORAGE_FSTYPE="btrfs"
+CONFIG_MINIO_STORAGE_FS_OPTS="--nodesize 16k"
+CONFIG_MINIO_STORAGE_LABEL="minio-btrfs"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-network"
+
+# Comprehensive benchmark suite
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="2m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=n
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-declared-hosts b/defconfigs/minio-warp-declared-hosts
new file mode 100644
index 0000000..acf1e64
--- /dev/null
+++ b/defconfigs/minio-warp-declared-hosts
@@ -0,0 +1,56 @@
+#
+# MinIO Warp S3 benchmarking with declared hosts (bare metal or pre-existing infrastructure)
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# Skip bringup for declared hosts
+CONFIG_SKIP_BRINGUP=y
+CONFIG_KDEVOPS_USE_DECLARED_HOSTS=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_16K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=16384
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-multifs b/defconfigs/minio-warp-multifs
new file mode 100644
index 0000000..8316a3f
--- /dev/null
+++ b/defconfigs/minio-warp-multifs
@@ -0,0 +1,74 @@
+#
+# MinIO Warp S3 benchmarking with multi-filesystem testing
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration with Multi-filesystem Testing
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_ENABLE_MULTIFS_TESTING=y
+
+# XFS configurations
+CONFIG_MINIO_MULTIFS_TEST_XFS=y
+CONFIG_MINIO_MULTIFS_XFS_4K_4KS=y
+CONFIG_MINIO_MULTIFS_XFS_16K_4KS=y
+
+# ext4 configurations
+CONFIG_MINIO_MULTIFS_TEST_EXT4=y
+CONFIG_MINIO_MULTIFS_EXT4_4K=y
+
+# btrfs configurations
+CONFIG_MINIO_MULTIFS_TEST_BTRFS=y
+CONFIG_MINIO_MULTIFS_BTRFS_DEFAULT=y
+
+CONFIG_MINIO_MULTIFS_RESULTS_DIR="/data/minio-multifs-benchmark"
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=100
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
diff --git a/defconfigs/minio-warp-storage b/defconfigs/minio-warp-storage
new file mode 100644
index 0000000..7f86212
--- /dev/null
+++ b/defconfigs/minio-warp-storage
@@ -0,0 +1,65 @@
+#
+# MinIO Warp S3 benchmarking with dedicated storage configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_4K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=4096
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=100
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
\ No newline at end of file
diff --git a/defconfigs/minio-warp-xfs b/defconfigs/minio-warp-xfs
new file mode 100644
index 0000000..95c4a64
--- /dev/null
+++ b/defconfigs/minio-warp-xfs
@@ -0,0 +1,35 @@
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# MinIO Configuration for XFS testing
+CONFIG_MINIO_ENABLE=y
+CONFIG_MINIO_CONTAINER_IMAGE="minio/minio:latest"
+CONFIG_MINIO_CONTAINER_NAME="minio-server"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+
+# Configure XFS filesystem for MinIO storage
+CONFIG_MINIO_USE_CUSTOM_FILESYSTEM=y
+CONFIG_MINIO_STORAGE_DEVICE="/dev/nvme0n1"
+CONFIG_MINIO_STORAGE_FSTYPE="xfs"
+CONFIG_MINIO_STORAGE_FS_OPTS="-b size=4k -s size=4k"
+CONFIG_MINIO_STORAGE_LABEL="minio-xfs"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-network"
+
+# Comprehensive benchmark suite
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="2m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=n
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-xfs-16k b/defconfigs/minio-warp-xfs-16k
new file mode 100644
index 0000000..82a90b9
--- /dev/null
+++ b/defconfigs/minio-warp-xfs-16k
@@ -0,0 +1,65 @@
+#
+# MinIO Warp S3 benchmarking with XFS 16K block size configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration - XFS with 16K blocks
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_16K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=16384
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=100
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
diff --git a/defconfigs/minio-warp-xfs-lbs b/defconfigs/minio-warp-xfs-lbs
new file mode 100644
index 0000000..7400954
--- /dev/null
+++ b/defconfigs/minio-warp-xfs-lbs
@@ -0,0 +1,65 @@
+#
+# MinIO Warp S3 benchmarking with XFS Large Block Size (LBS) configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration - XFS with 64K blocks (LBS)
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_64K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=65536
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration - Large objects for LBS testing
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="10m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=50
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=200
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
\ No newline at end of file
diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index cca0b70..73ba976 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -233,6 +233,13 @@ config KDEVOPS_WORKFLOW_DEDICATE_AI
 	  This will dedicate your configuration to running only the
 	  AI workflow for vector database performance testing.
 
+config KDEVOPS_WORKFLOW_DEDICATE_MINIO
+	bool "minio"
+	select KDEVOPS_WORKFLOW_ENABLE_MINIO
+	help
+	  This will dedicate your configuration to running only the
+	  MinIO workflow for S3 storage benchmarking with Warp testing.
+
 endchoice
 
 config KDEVOPS_WORKFLOW_NAME
@@ -250,6 +257,7 @@ config KDEVOPS_WORKFLOW_NAME
 	default "mmtests" if KDEVOPS_WORKFLOW_DEDICATE_MMTESTS
 	default "fio-tests" if KDEVOPS_WORKFLOW_DEDICATE_FIO_TESTS
 	default "ai" if KDEVOPS_WORKFLOW_DEDICATE_AI
+	default "minio" if KDEVOPS_WORKFLOW_DEDICATE_MINIO
 
 endif
 
@@ -513,6 +521,17 @@ source "workflows/ai/Kconfig"
 endmenu
 endif # KDEVOPS_WORKFLOW_ENABLE_AI
 
+config KDEVOPS_WORKFLOW_ENABLE_MINIO
+	bool
+	output yaml
+	default y if KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_MINIO || KDEVOPS_WORKFLOW_DEDICATE_MINIO
+
+if KDEVOPS_WORKFLOW_ENABLE_MINIO
+menu "Configure and run MinIO S3 benchmarks"
+source "workflows/minio/Kconfig"
+endmenu
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO
+
 config KDEVOPS_WORKFLOW_ENABLE_SSD_STEADY_STATE
        bool "Attain SSD steady state prior to tests"
        output yaml
diff --git a/playbooks/minio.yml b/playbooks/minio.yml
new file mode 100644
index 0000000..bf80bbf
--- /dev/null
+++ b/playbooks/minio.yml
@@ -0,0 +1,53 @@
+---
+# MinIO S3 Storage Benchmarking Playbook
+
+- name: Install MinIO and setup
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_install']
+  roles:
+    - role: minio_install
+    - role: minio_setup
+      vars:
+        minio_container_image: "{{ minio_container_image_string }}"
+        minio_container_name: "{{ minio_container_name }}"
+        minio_api_port: "{{ minio_api_port }}"
+        minio_console_port: "{{ minio_console_port }}"
+        minio_access_key: "{{ minio_access_key }}"
+        minio_secret_key: "{{ minio_secret_key }}"
+        minio_data_path: "{{ minio_data_path }}"
+        minio_memory_limit: "{{ minio_memory_limit }}"
+        minio_docker_network: "{{ minio_docker_network_name }}"
+
+- name: Run MinIO Warp benchmarks
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_warp']
+  roles:
+    - role: minio_warp_run
+
+- name: Uninstall MinIO
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_uninstall']
+  roles:
+    - role: minio_uninstall
+
+- name: Destroy MinIO and cleanup
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_destroy']
+  roles:
+    - role: minio_destroy
+
+- name: Analyze MinIO results
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_results']
+  roles:
+    - role: minio_results
diff --git a/playbooks/roles/ai_setup/tasks/main.yml b/playbooks/roles/ai_setup/tasks/main.yml
index b894c96..899fcee 100644
--- a/playbooks/roles/ai_setup/tasks/main.yml
+++ b/playbooks/roles/ai_setup/tasks/main.yml
@@ -15,7 +15,6 @@
   loop:
     - "{{ ai_docker_data_path }}"
     - "{{ ai_docker_etcd_data_path }}"
-    - "{{ ai_docker_minio_data_path }}"
   when: ai_milvus_docker | bool
   become: true
 
@@ -50,24 +49,20 @@
     memory: "{{ ai_etcd_memory_limit }}"
   when: ai_milvus_docker | bool
 
-- name: Start MinIO container
-  community.docker.docker_container:
-    name: "{{ ai_minio_container_name }}"
-    image: "{{ ai_minio_container_image_string }}"
-    state: started
-    restart_policy: unless-stopped
-    networks:
-      - name: "{{ ai_docker_network_name }}"
-    ports:
-      - "{{ ai_minio_api_port }}:9000"
-      - "{{ ai_minio_console_port }}:9001"
-    env:
-      MINIO_ACCESS_KEY: "{{ ai_minio_access_key }}"
-      MINIO_SECRET_KEY: "{{ ai_minio_secret_key }}"
-    ansible.builtin.command: server /minio_data --console-address ":9001"
-    volumes:
-      - "{{ ai_docker_minio_data_path }}:/minio_data"
-    memory: "{{ ai_minio_memory_limit }}"
+- name: Setup MinIO using shared role
+  include_role:
+    name: minio_setup
+  vars:
+    minio_container_image: "{{ ai_minio_container_image_string }}"
+    minio_container_name: "{{ ai_minio_container_name }}"
+    minio_api_port: "{{ ai_minio_api_port }}"
+    minio_console_port: "{{ ai_minio_console_port }}"
+    minio_access_key: "{{ ai_minio_access_key }}"
+    minio_secret_key: "{{ ai_minio_secret_key }}"
+    minio_data_path: "{{ ai_docker_minio_data_path }}"
+    minio_memory_limit: "{{ ai_minio_memory_limit }}"
+    minio_docker_network: "{{ ai_docker_network_name }}"
+    minio_create_network: false  # Network already created above
   when: ai_milvus_docker | bool
 
 - name: Wait for etcd to be ready
@@ -77,13 +72,6 @@
     timeout: 60
   when: ai_milvus_docker | bool
 
-- name: Wait for MinIO to be ready
-  ansible.builtin.wait_for:
-    host: localhost
-    port: "{{ ai_minio_api_port }}"
-    timeout: 60
-  when: ai_milvus_docker | bool
-
 - name: Start Milvus container
   community.docker.docker_container:
     name: "{{ ai_milvus_container_name }}"
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index d44566a..6cdbdb7 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -179,7 +179,7 @@
     name: guestfs_nodes
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - ansible_hosts_template.stat.exists
 
@@ -188,7 +188,7 @@
     all_generic_nodes: "{{ guestfs_nodes.guestfs_nodes | map(attribute='name') | list }}"
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - guestfs_nodes is defined
 
@@ -221,6 +221,21 @@
     state: touch
     mode: "0755"
 
+- name: Generate the Ansible hosts file for a dedicated MinIO setup
+  tags: ['hosts']
+  ansible.builtin.template:
+    src: "{{ kdevops_hosts_template }}"
+    dest: "{{ ansible_cfg_inventory }}"
+    force: true
+    trim_blocks: True
+    lstrip_blocks: True
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts|default(false)|bool
+
 - name: Verify if final host file exists
   ansible.builtin.stat:
     path: "{{ ansible_cfg_inventory }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/minio.j2 b/playbooks/roles/gen_hosts/templates/workflows/minio.j2
new file mode 100644
index 0000000..42ba326
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/minio.j2
@@ -0,0 +1,173 @@
+{# Workflow template for MinIO #}
+{% if minio_enable_multifs_testing|default(false)|bool %}
+{# Multi-filesystem MinIO configuration #}
+[all]
+localhost ansible_connection=local
+{% for config in minio_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for config in minio_enabled_section_types|default([]) %}
+{% if '-dev' not in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for config in minio_enabled_section_types|default([]) %}
+{% if '-dev' in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+[minio]
+{% for config in minio_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[minio:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{# Create filesystem-specific groups #}
+{% if minio_multifs_xfs_4k_4ks|default(false)|bool %}
+[minio-xfs-4k]
+{{ kdevops_host_prefix }}-minio-xfs-4k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-4k-dev
+{% endif %}
+
+[minio-xfs-4k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 4096
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_16k_4ks|default(false)|bool %}
+[minio-xfs-16k]
+{{ kdevops_host_prefix }}-minio-xfs-16k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-16k-dev
+{% endif %}
+
+[minio-xfs-16k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 16384
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_32k_4ks|default(false)|bool %}
+[minio-xfs-32k]
+{{ kdevops_host_prefix }}-minio-xfs-32k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-32k-dev
+{% endif %}
+
+[minio-xfs-32k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 32768
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_64k_4ks|default(false)|bool %}
+[minio-xfs-64k]
+{{ kdevops_host_prefix }}-minio-xfs-64k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-64k-dev
+{% endif %}
+
+[minio-xfs-64k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 65536
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_ext4_4k|default(false)|bool %}
+[minio-ext4-4k]
+{{ kdevops_host_prefix }}-minio-ext4-4k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-ext4-4k-dev
+{% endif %}
+
+[minio-ext4-4k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "ext4"
+minio_ext4_mkfs_opts = "-F"
+{% endif %}
+
+{% if minio_multifs_ext4_16k_bigalloc|default(false)|bool %}
+[minio-ext4-16k-bigalloc]
+{{ kdevops_host_prefix }}-minio-ext4-16k-bigalloc
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-ext4-16k-bigalloc-dev
+{% endif %}
+
+[minio-ext4-16k-bigalloc:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "ext4"
+minio_ext4_mkfs_opts = "-F -O bigalloc -C 16384"
+{% endif %}
+
+{% if minio_multifs_btrfs_default|default(false)|bool %}
+[minio-btrfs]
+{{ kdevops_host_prefix }}-minio-btrfs
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-btrfs-dev
+{% endif %}
+
+[minio-btrfs:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "btrfs"
+minio_btrfs_mkfs_opts = "-f"
+{% endif %}
+
+{% else %}
+{# Standard single-filesystem MinIO configuration #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-minio
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-minio
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-minio-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[minio]
+{{ kdevops_host_prefix }}-minio
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-dev
+{% endif %}
+
+[minio:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index 1ab81d3..9bd9b84 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -681,7 +681,7 @@
     mode: '0644'
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ansible_nodes_template.stat.exists
     - not kdevops_baseline_and_dev
     - not ai_enable_multifs_testing|default(false)|bool
@@ -699,7 +699,7 @@
     mode: '0644'
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ansible_nodes_template.stat.exists
     - kdevops_baseline_and_dev
     - not ai_enable_multifs_testing|default(false)|bool
@@ -742,7 +742,7 @@
     ai_multifs_enabled_configs: "{{ (xfs_configs + ext4_configs + btrfs_configs) | unique }}"
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - ansible_nodes_template.stat.exists
 
@@ -753,7 +753,7 @@
     ai_enabled_section_types: "{{ filesystem_nodes }}"
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - ansible_nodes_template.stat.exists
     - not kdevops_baseline_and_dev
@@ -767,7 +767,7 @@
     ai_enabled_section_types: "{{ filesystem_nodes | product(['', '-dev']) | map('join') | list }}"
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - ansible_nodes_template.stat.exists
     - kdevops_baseline_and_dev
@@ -786,12 +786,128 @@
     force: yes
   when:
     - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
+    - kdevops_workflow_enable_ai|default(false)|bool
     - ai_enable_multifs_testing|default(false)|bool
     - ansible_nodes_template.stat.exists
     - ai_enabled_section_types is defined
     - ai_enabled_section_types | length > 0
 
+# MinIO S3 Storage Testing workflow nodes
+
+# Multi-filesystem MinIO configurations
+- name: Collect enabled MinIO multi-filesystem configurations
+  vars:
+    xfs_configs: >-
+      {{
+        [] +
+        (['xfs-4k'] if minio_multifs_xfs_4k_4ks|default(false)|bool else []) +
+        (['xfs-16k'] if minio_multifs_xfs_16k_4ks|default(false)|bool else []) +
+        (['xfs-32k'] if minio_multifs_xfs_32k_4ks|default(false)|bool else []) +
+        (['xfs-64k'] if minio_multifs_xfs_64k_4ks|default(false)|bool else [])
+      }}
+    ext4_configs: >-
+      {{
+        [] +
+        (['ext4-4k'] if minio_multifs_ext4_4k|default(false)|bool else []) +
+        (['ext4-16k-bigalloc'] if minio_multifs_ext4_16k_bigalloc|default(false)|bool else [])
+      }}
+    btrfs_configs: >-
+      {{
+        [] +
+        (['btrfs'] if minio_multifs_btrfs_default|default(false)|bool else [])
+      }}
+  set_fact:
+    minio_multifs_enabled_configs: "{{ (xfs_configs + ext4_configs + btrfs_configs) | unique }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+
+- name: Create MinIO nodes for each filesystem configuration (no dev)
+  vars:
+    filesystem_nodes: "{{ [kdevops_host_prefix + '-minio-'] | product(minio_multifs_enabled_configs | default([])) | map('join') | list }}"
+  set_fact:
+    minio_enabled_section_types: "{{ filesystem_nodes }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - not kdevops_baseline_and_dev
+    - minio_multifs_enabled_configs is defined
+    - minio_multifs_enabled_configs | length > 0
+
+- name: Create MinIO nodes for each filesystem configuration with dev hosts
+  vars:
+    filesystem_nodes: "{{ [kdevops_host_prefix + '-minio-'] | product(minio_multifs_enabled_configs | default([])) | map('join') | list }}"
+  set_fact:
+    minio_enabled_section_types: "{{ filesystem_nodes | product(['', '-dev']) | map('join') | list }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - kdevops_baseline_and_dev
+    - minio_multifs_enabled_configs is defined
+    - minio_multifs_enabled_configs | length > 0
+
+- name: Generate the MinIO multi-filesystem kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: [ 'hosts' ]
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ minio_enabled_section_types }}"
+    all_generic_nodes: "{{ minio_enabled_section_types }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - minio_enabled_section_types is defined
+    - minio_enabled_section_types | length > 0
+
+# Standard MinIO single filesystem nodes
+- name: Generate the MinIO kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['hosts']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ [kdevops_host_prefix + '-minio'] }}"
+    all_generic_nodes: "{{ [kdevops_host_prefix + '-minio'] }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - not kdevops_baseline_and_dev
+    - not minio_enable_multifs_testing|default(false)|bool
+
+- name: Generate the MinIO kdevops nodes file with dev hosts using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['hosts']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ [kdevops_host_prefix + '-minio', kdevops_host_prefix + '-minio-dev'] }}"
+    all_generic_nodes: "{{ [kdevops_host_prefix + '-minio', kdevops_host_prefix + '-minio-dev'] }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - kdevops_baseline_and_dev
+    - not minio_enable_multifs_testing|default(false)|bool
+
 - name: Get the control host's timezone
   ansible.builtin.command: "timedatectl show -p Timezone --value"
   register: kdevops_host_timezone
diff --git a/playbooks/roles/minio_destroy/tasks/main.yml b/playbooks/roles/minio_destroy/tasks/main.yml
new file mode 100644
index 0000000..078cb13
--- /dev/null
+++ b/playbooks/roles/minio_destroy/tasks/main.yml
@@ -0,0 +1,34 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Stop and remove MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    state: absent
+  ignore_errors: yes
+
+- name: Remove Docker network
+  community.docker.docker_network:
+    name: "{{ minio_docker_network_name }}"
+    state: absent
+  ignore_errors: yes
+
+- name: Clean up MinIO data directory
+  file:
+    path: "{{ minio_data_path }}"
+    state: absent
+  when: minio_warp_enable_cleanup | default(true) | bool
+
+- name: Clean up temporary Warp results
+  file:
+    path: "/tmp/warp-results"
+    state: absent
+
+- name: Display MinIO destroy complete
+  debug:
+    msg: "MinIO containers and data have been cleaned up"
diff --git a/playbooks/roles/minio_install/tasks/main.yml b/playbooks/roles/minio_install/tasks/main.yml
new file mode 100644
index 0000000..9ea3d75
--- /dev/null
+++ b/playbooks/roles/minio_install/tasks/main.yml
@@ -0,0 +1,61 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Install Docker
+  package:
+    name:
+      - docker.io
+      - python3-docker
+    state: present
+  become: yes
+
+- name: Ensure Docker service is running
+  systemd:
+    name: docker
+    state: started
+    enabled: yes
+  become: yes
+
+- name: Add current user to docker group
+  user:
+    name: "{{ ansible_user | default('kdevops') }}"
+    groups: docker
+    append: yes
+  become: yes
+
+- name: Install MinIO Warp
+  block:
+    - name: Download MinIO Warp binary
+      get_url:
+        url: "https://github.com/minio/warp/releases/latest/download/warp_Linux_x86_64.tar.gz"
+        dest: "/tmp/warp_Linux_x86_64.tar.gz"
+        mode: '0644'
+
+    - name: Extract MinIO Warp
+      unarchive:
+        src: "/tmp/warp_Linux_x86_64.tar.gz"
+        dest: "/tmp"
+        remote_src: yes
+
+    - name: Install Warp binary
+      copy:
+        src: "/tmp/warp"
+        dest: "/usr/local/bin/warp"
+        mode: '0755'
+        owner: root
+        group: root
+        remote_src: yes
+      become: yes
+
+    - name: Clean up downloaded files
+      file:
+        path: "{{ item }}"
+        state: absent
+      loop:
+        - "/tmp/warp_Linux_x86_64.tar.gz"
+        - "/tmp/warp"
diff --git a/playbooks/roles/minio_results/tasks/main.yml b/playbooks/roles/minio_results/tasks/main.yml
new file mode 100644
index 0000000..7403855
--- /dev/null
+++ b/playbooks/roles/minio_results/tasks/main.yml
@@ -0,0 +1,86 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Create results analysis script
+  copy:
+    content: |
+      #!/usr/bin/env python3
+      import json
+      import glob
+      import os
+      import sys
+      from pathlib import Path
+
+      def analyze_warp_results():
+          results_dir = Path("{{ playbook_dir }}/../workflows/minio/results")
+          result_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+          if not result_files:
+              print("No Warp benchmark results found.")
+              return
+
+          print("MinIO Warp Benchmark Results Summary")
+          print("=" * 50)
+
+          total_throughput = 0
+          total_requests = 0
+          for result_file in result_files:
+              try:
+                  with open(result_file, 'r') as f:
+                      data = json.load(f)
+
+                  hostname = result_file.name.split('_')[2]
+                  timestamp = result_file.name.split('_')[3].replace('.json', '')
+
+                  # Extract key metrics from Warp results (default to 0 so
+                  # the summary below never references an unset variable)
+                  throughput_mbps = 0
+                  req_per_sec = 0
+                  if 'throughput' in data:
+                      throughput_mbps = data['throughput'].get('average', 0) / (1024 * 1024)
+                      total_throughput += throughput_mbps
+                  if 'requests' in data:
+                      req_per_sec = data['requests'].get('average', 0)
+                      total_requests += req_per_sec
+
+                  print(f"\nHost: {hostname}")
+                  print(f"Timestamp: {timestamp}")
+                  print(f"Throughput: {throughput_mbps:.2f} MB/s")
+                  print(f"Requests/sec: {req_per_sec:.2f}")
+
+                  if 'latency' in data:
+                      avg_latency = data['latency'].get('average', 0)
+                      print(f"Average Latency: {avg_latency:.2f} ms")
+
+              except Exception as e:
+                  print(f"Error processing {result_file}: {e}")
+          print("\n" + "=" * 50)
+          print(f"Total Throughput: {total_throughput:.2f} MB/s")
+          print(f"Total Requests/sec: {total_requests:.2f}")
+
+      if __name__ == "__main__":
+          analyze_warp_results()
+    dest: "/tmp/analyze_minio_results.py"
+    mode: '0755'
+  delegate_to: localhost
+  run_once: true
+
+- name: Run results analysis
+  command: python3 /tmp/analyze_minio_results.py
+  register: analysis_output
+  delegate_to: localhost
+  run_once: true
+
+- name: Display analysis results
+  debug:
+    var: analysis_output.stdout_lines
+
+- name: Create results summary file
+  copy:
+    content: "{{ analysis_output.stdout }}"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/benchmark_summary.txt"
+  delegate_to: localhost
+  run_once: true
diff --git a/playbooks/roles/minio_setup/defaults/main.yml b/playbooks/roles/minio_setup/defaults/main.yml
new file mode 100644
index 0000000..1403010
--- /dev/null
+++ b/playbooks/roles/minio_setup/defaults/main.yml
@@ -0,0 +1,16 @@
+---
+# MinIO Docker container defaults
+minio_container_image: "minio/minio:RELEASE.2023-03-20T20-16-18Z"
+minio_container_name: "minio-server"
+minio_api_port: 9000
+minio_console_port: 9001
+minio_access_key: "minioadmin"
+minio_secret_key: "minioadmin"
+minio_data_path: "/data/minio"
+minio_memory_limit: "2g"
+minio_docker_network: "minio-network"
+
+# MinIO service configuration
+minio_enable: true
+minio_create_network: true
+minio_wait_for_ready: true
diff --git a/playbooks/roles/minio_setup/tasks/main.yml b/playbooks/roles/minio_setup/tasks/main.yml
new file mode 100644
index 0000000..db7e3d6
--- /dev/null
+++ b/playbooks/roles/minio_setup/tasks/main.yml
@@ -0,0 +1,100 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Setup dedicated MinIO storage filesystem if configured
+  when:
+    - minio_storage_enable | default(false) | bool
+    - minio_device is defined
+  block:
+    - name: Prepare filesystem mkfs options
+      set_fact:
+        minio_mkfs_opts: >-
+          {%- if minio_fstype == "xfs" -%}
+            -L miniostorage -f -b size={{ minio_xfs_blocksize | default(4096) }} -s size={{ minio_xfs_sectorsize | default(4096) }} {{ minio_xfs_mkfs_opts | default('') }}
+          {%- elif minio_fstype == "btrfs" -%}
+            -L miniostorage {{ minio_btrfs_mkfs_opts | default('-f') }}
+          {%- elif minio_fstype == "ext4" -%}
+            -L miniostorage {{ minio_ext4_mkfs_opts | default('-F') }}
+          {%- elif minio_fstype == "bcachefs" -%}
+            --label=miniostorage {{ minio_bcachefs_mkfs_opts | default('-f') }}
+          {%- else -%}
+            -L miniostorage -f
+          {%- endif -%}
+
+    - name: Create MinIO storage filesystem
+      include_role:
+        name: create_partition
+      vars:
+        disk_setup_device: "{{ minio_device }}"
+        disk_setup_fstype: "{{ minio_fstype | default('xfs') }}"
+        disk_setup_label: "miniostorage"
+        disk_setup_fs_opts: "{{ minio_mkfs_opts }}"
+        disk_setup_path: "{{ minio_mount_point | default('/data/minio') }}"
+        disk_setup_user: "root"
+        disk_setup_group: "root"
+
+- name: Create MinIO data directory
+  file:
+    path: "{{ minio_data_path | default('/data/minio') }}"
+    state: directory
+    mode: '0755'
+  when:
+    - not (minio_storage_enable | default(false) | bool)
+  become: true
+
+- name: Check filesystem type for MinIO data path
+  shell: df -T "{{ minio_data_path }}" | tail -1 | awk '{print $2}'
+  register: minio_fs_type
+  changed_when: false
+
+- name: Get filesystem details
+  shell: |
+    df -h "{{ minio_data_path }}" | tail -1
+  register: minio_fs_details
+  changed_when: false
+
+- name: Display filesystem information
+  debug:
+    msg: |
+      MinIO Storage Configuration:
+        Data Path: {{ minio_data_path }}
+        Filesystem: {{ minio_fs_type.stdout }}
+        Storage Details: {{ minio_fs_details.stdout }}
+
+- name: Create Docker network for MinIO
+  community.docker.docker_network:
+    name: "{{ minio_docker_network_name | default(minio_docker_network) }}"
+    state: present
+  when: minio_enable | bool and minio_create_network | bool
+
+- name: Start MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    image: "{{ minio_container_image }}"
+    state: started
+    restart_policy: unless-stopped
+    networks:
+      - name: "{{ minio_docker_network_name | default(minio_docker_network) }}"
+    ports:
+      - "{{ minio_api_port }}:9000"
+      - "{{ minio_console_port }}:9001"
+    env:
+      MINIO_ROOT_USER: "{{ minio_access_key }}"
+      MINIO_ROOT_PASSWORD: "{{ minio_secret_key }}"
+    command: server /minio_data --console-address ":9001"
+    volumes:
+      - "{{ minio_data_path }}:/minio_data"
+    memory: "{{ minio_memory_limit }}"
+  when: minio_enable | bool
+
+- name: Wait for MinIO to be ready
+  wait_for:
+    host: localhost
+    port: "{{ minio_api_port }}"
+    timeout: 60
+  when: minio_enable | bool and minio_wait_for_ready | bool
diff --git a/playbooks/roles/minio_uninstall/tasks/main.yml b/playbooks/roles/minio_uninstall/tasks/main.yml
new file mode 100644
index 0000000..bea1543
--- /dev/null
+++ b/playbooks/roles/minio_uninstall/tasks/main.yml
@@ -0,0 +1,17 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Stop MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    state: stopped
+  ignore_errors: yes
+
+- name: Display MinIO uninstallation complete
+  debug:
+    msg: "MinIO container stopped"
diff --git a/playbooks/roles/minio_warp_run/tasks/main.yml b/playbooks/roles/minio_warp_run/tasks/main.yml
new file mode 100644
index 0000000..415d355
--- /dev/null
+++ b/playbooks/roles/minio_warp_run/tasks/main.yml
@@ -0,0 +1,249 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Create Warp results directory on remote host
+  file:
+    path: "/tmp/warp-results"
+    state: directory
+    mode: '0755'
+
+- name: Ensure local results directory exists with proper permissions
+  block:
+    - name: Create local results directory
+      file:
+        path: "{{ playbook_dir }}/../workflows/minio/results"
+        state: directory
+        mode: '0755'
+      delegate_to: localhost
+      run_once: true
+      become: no
+  rescue:
+    - name: Fix results directory permissions if needed
+      file:
+        path: "{{ playbook_dir }}/../workflows/minio/results"
+        state: directory
+        mode: '0755'
+        owner: "{{ lookup('env', 'USER') }}"
+        group: "{{ lookup('env', 'USER') }}"
+      delegate_to: localhost
+      run_once: true
+      become: yes
+
+
+- name: Wait for MinIO to be fully ready
+  wait_for:
+    host: localhost
+    port: "{{ minio_api_port }}"
+    timeout: 120
+  retries: 3
+  delay: 10
+
+- name: Check if Warp is installed
+  command: which warp
+  register: warp_check
+  failed_when: false
+  changed_when: false
+
+- name: Verify Warp installation
+  fail:
+    msg: "MinIO Warp is not installed. Please run 'make minio-install' first."
+  when: warp_check.rc != 0
+
+- name: Create Warp configuration file
+  template:
+    src: warp_config.json.j2
+    dest: "/tmp/warp_config.json"
+    mode: '0644'
+
+- name: Set MinIO endpoint URL
+  set_fact:
+    minio_endpoint: "localhost:{{ minio_api_port }}"
+
+- name: Display Warp version
+  command: warp --version
+  register: warp_version
+  changed_when: false
+
+- name: Show Warp version
+  debug:
+    msg: "MinIO Warp version: {{ warp_version.stdout }}"
+
+- name: Calculate benchmark timeout
+  set_fact:
+    # Parse duration and add 10 minutes buffer
+    benchmark_timeout: >-
+      {%- set duration_str = minio_warp_duration | string -%}
+      {%- if 's' in duration_str -%}
+        {{ (duration_str | replace('s','') | int) + 600 }}
+      {%- elif 'm' in duration_str -%}
+        {{ (duration_str | replace('m','') | int * 60) + 600 }}
+      {%- elif 'h' in duration_str -%}
+        {{ (duration_str | replace('h','') | int * 3600) + 600 }}
+      {%- else -%}
+        {{ 2400 }}
+      {%- endif -%}
+
+- name: Copy comprehensive benchmark script
+  copy:
+    src: "{{ playbook_dir }}/../workflows/minio/scripts/run_benchmark_suite.sh"
+    dest: "/tmp/run_benchmark_suite.sh"
+    mode: '0755'
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Display benchmark configuration
+  debug:
+    msg: |
+      Comprehensive suite: {{ minio_warp_run_comprehensive_suite | default(false) }}
+      Duration: {{ minio_warp_duration }}
+      Timeout: {{ benchmark_timeout }} seconds
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Run comprehensive benchmark suite
+  shell: |
+    set -x  # Enable debug output
+    echo "Starting comprehensive benchmark suite"
+    echo "Duration parameter: {{ minio_warp_duration }}"
+    /tmp/run_benchmark_suite.sh \
+      "{{ minio_endpoint }}" \
+      "{{ minio_access_key }}" \
+      "{{ minio_secret_key }}" \
+      "{{ minio_warp_duration }}"
+    EXIT_CODE=$?
+    echo "Benchmark suite completed with exit code: $EXIT_CODE"
+    exit $EXIT_CODE
+  args:
+    executable: /bin/bash
+  register: suite_output
+  when: minio_warp_run_comprehensive_suite | default(false)
+  async: "{{ benchmark_timeout | default(3600) | int }}"  # Use calculated timeout or 1 hour default
+  poll: 30
+
+- name: Display comprehensive suite output
+  debug:
+    msg: |
+      Suite completed: {{ suite_output is defined }}
+      Exit code: {{ suite_output.rc | default('N/A') }}
+      Output: {{ suite_output.stdout | default('No output') | truncate(500) }}
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Debug - Show which path we're taking
+  debug:
+    msg: |
+      Comprehensive suite enabled: {{ minio_warp_run_comprehensive_suite | default(false) }}
+      Duration: {{ minio_warp_duration }}
+      Benchmark timeout: {{ benchmark_timeout }} seconds
+
+- name: Set timestamp for consistent filename
+  set_fact:
+    warp_timestamp: "{{ ansible_date_time.epoch }}"
+  when: not (minio_warp_run_comprehensive_suite | default(false))
+
+- name: Run MinIO Warp single benchmark with JSON output
+  shell: |
+    echo "=== Starting single benchmark ==="
+    echo "Duration: {{ minio_warp_duration }}"
+    echo "Full command:"
+    OUTPUT_FILE="/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+
+    # Show the actual command being run
+    set -x
+    # IMPORTANT: --autoterm with --objects makes warp stop after N objects, ignoring --duration!
+    # For duration-based tests, do not use --autoterm
+    time warp {{ minio_warp_benchmark_type }} \
+      --host="{{ minio_endpoint }}" \
+      --access-key="{{ minio_access_key }}" \
+      --secret-key="{{ minio_secret_key }}" \
+      --bucket="{{ minio_warp_bucket_name }}" \
+      --duration="{{ minio_warp_duration }}" \
+      --concurrent="{{ minio_warp_concurrent_requests }}" \
+      --obj.size="{{ minio_warp_object_size }}" \
+      {% if minio_warp_enable_web_ui|default(false) %}--warp-client="{{ ansible_default_ipv4.address }}:{{ minio_warp_web_ui_port|default(7762) }}"{% endif %} \
+      --noclear \
+      --json > "$OUTPUT_FILE" 2>&1
+    RESULT=$?
+    set +x
+
+    echo "=== Benchmark completed with exit code: $RESULT ==="
+    echo "=== Output file size: $(ls -lh "$OUTPUT_FILE" 2>/dev/null | awk '{print $5}') ==="
+
+    # Check if file was created
+    if [ -f "$OUTPUT_FILE" ]; then
+      echo "Results saved to: $OUTPUT_FILE"
+      ls -la "$OUTPUT_FILE"
+    else
+      echo "Warning: Results file not created"
+    fi
+    exit $RESULT
+  args:
+    executable: /bin/bash
+  environment:
+    WARP_ACCESS_KEY: "{{ minio_access_key }}"
+    WARP_SECRET_KEY: "{{ minio_secret_key }}"
+  register: warp_output
+  async: "{{ benchmark_timeout | int }}"
+  poll: 30
+  when: not (minio_warp_run_comprehensive_suite | default(false))
+
+- name: Display benchmark completion
+  debug:
+    msg: "MinIO Warp benchmark completed on {{ ansible_hostname }}"
+  when: (warp_output is defined and warp_output.rc | default(1) == 0) or (suite_output is defined and suite_output.rc | default(1) == 0)
+
+- name: Check if results file exists
+  stat:
+    path: "/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+  register: results_file
+  when: warp_timestamp is defined
+
+- name: Display results file status
+  debug:
+    msg: "Results file exists: {{ results_file.stat.exists }}, Size: {{ results_file.stat.size | default(0) }} bytes"
+  when: results_file is defined
+
+- name: Copy results to local system
+  fetch:
+    src: "/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/"
+    flat: yes
+  become: no
+  when: results_file.stat.exists | default(false)
+
+- name: Generate graphs and HTML report
+  command: "python3 {{ playbook_dir }}/../workflows/minio/scripts/generate_warp_report.py {{ playbook_dir }}/../workflows/minio/results/"
+  delegate_to: localhost
+  run_once: true
+  become: no
+  when: results_file.stat.exists | default(false)
+  ignore_errors: yes
+
+- name: Save benchmark output as fallback
+  copy:
+    content: |
+      MinIO Warp Benchmark Results
+      ============================
+      Host: {{ ansible_hostname }}
+      Timestamp: {{ warp_timestamp | default('unknown') }}
+
+      Full Benchmark Output:
+      {{ warp_output.stdout | default('No benchmark output captured') }}
+
+      Error Output (if any):
+      {{ warp_output.stderr | default('No errors') }}
+    dest: "/tmp/warp-results/warp_fallback_{{ ansible_hostname }}_{{ warp_timestamp | default(ansible_date_time.epoch) }}.txt"
+  when: warp_output is defined
+
+- name: Copy fallback results
+  fetch:
+    src: "/tmp/warp-results/warp_fallback_{{ ansible_hostname }}_{{ warp_timestamp | default(ansible_date_time.epoch) }}.txt"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/"
+    flat: yes
+  when: warp_output is defined and not (results_file.stat.exists | default(false))
diff --git a/playbooks/roles/minio_warp_run/templates/warp_config.json.j2 b/playbooks/roles/minio_warp_run/templates/warp_config.json.j2
new file mode 100644
index 0000000..d5c4dc8
--- /dev/null
+++ b/playbooks/roles/minio_warp_run/templates/warp_config.json.j2
@@ -0,0 +1,14 @@
+{
+  "benchmark": "{{ minio_warp_benchmark_type }}",
+  "endpoint": "http://localhost:{{ minio_api_port }}",
+  "access_key": "{{ minio_access_key }}",
+  "secret_key": "{{ minio_secret_key }}",
+  "bucket": "{{ minio_warp_bucket_name }}",
+  "duration": "{{ minio_warp_duration }}",
+  "concurrent": {{ minio_warp_concurrent_requests }},
+  "object_size": "{{ minio_warp_object_size }}",
+  "objects": {{ minio_warp_objects_per_request }},
+  "auto_terminate": {{ minio_warp_auto_terminate | lower }},
+  "cleanup": {{ minio_warp_enable_cleanup | lower }},
+  "output_format": "{{ minio_warp_output_format }}"
+}
diff --git a/workflows/Makefile b/workflows/Makefile
index fe35707..ee90227 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -70,6 +70,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_AI))
 include workflows/ai/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_AI == y
 
+ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO))
+include workflows/minio/Makefile
+endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO == y
+
 ANSIBLE_EXTRA_ARGS += $(WORKFLOW_ARGS)
 ANSIBLE_EXTRA_ARGS_SEPARATED += $(WORKFLOW_ARGS_SEPARATED)
 ANSIBLE_EXTRA_ARGS_DIRECT += $(WORKFLOW_ARGS_DIRECT)
diff --git a/workflows/minio/Kconfig b/workflows/minio/Kconfig
new file mode 100644
index 0000000..2af12fc
--- /dev/null
+++ b/workflows/minio/Kconfig
@@ -0,0 +1,23 @@
+if KDEVOPS_WORKFLOW_ENABLE_MINIO
+
+menu "MinIO S3 Storage Testing"
+
+config KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+	bool "Enable MinIO Warp benchmarking"
+	default y
+	help
+	Enable MinIO Warp for S3 storage benchmarking. Warp provides
+	comprehensive S3 API performance testing with multiple benchmark
+	types including GET, PUT, DELETE, LIST, and MULTIPART operations.
+
+if KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+
+source "workflows/minio/Kconfig.docker"
+source "workflows/minio/Kconfig.storage"
+source "workflows/minio/Kconfig.warp"
+
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+
+endmenu
+
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO
diff --git a/workflows/minio/Kconfig.docker b/workflows/minio/Kconfig.docker
new file mode 100644
index 0000000..3a33719
--- /dev/null
+++ b/workflows/minio/Kconfig.docker
@@ -0,0 +1,66 @@
+config MINIO_CONTAINER_IMAGE_STRING
+	string "MinIO container image"
+	output yaml
+	default "minio/minio:RELEASE.2024-01-16T16-07-38Z"
+	help
+	The MinIO container image to use for S3 storage benchmarking.
+	Using a recent stable release with performance improvements.
+
+config MINIO_CONTAINER_NAME
+	string "The local MinIO container name"
+	default "minio-warp-server"
+	output yaml
+	help
+	Set the name for the MinIO Docker container.
+
+config MINIO_ACCESS_KEY
+	string "MinIO access key"
+	output yaml
+	default "minioadmin"
+	help
+	Access key for MinIO S3 API access.
+
+config MINIO_SECRET_KEY
+	string "MinIO secret key"
+	output yaml
+	default "minioadmin"
+	help
+	Secret key for MinIO S3 API access.
+
+config MINIO_DATA_PATH
+	string "Host path for MinIO data storage"
+	output yaml
+	default "/data/minio"
+	help
+	Directory on the host where MinIO data will be persisted.
+	If using dedicated storage, this will be the mount point.
+	Otherwise, uses the existing filesystem at this path.
+
+config MINIO_DOCKER_NETWORK_NAME
+	string "Docker network name"
+	output yaml
+	default "minio-warp-network"
+	help
+	Name of the Docker network to create for MinIO containers.
+
+config MINIO_API_PORT
+	int "MinIO API port"
+	output yaml
+	default "9000"
+	help
+	Port for MinIO S3 API access.
+
+config MINIO_CONSOLE_PORT
+	int "MinIO console port"
+	output yaml
+	default "9001"
+	help
+	Port for MinIO web console access.
+
+config MINIO_MEMORY_LIMIT
+	string "MinIO container memory limit"
+	output yaml
+	default "4g"
+	help
+	Memory limit for the MinIO container. Adjust based on
+	your system resources and workload requirements.
diff --git a/workflows/minio/Kconfig.storage b/workflows/minio/Kconfig.storage
new file mode 100644
index 0000000..8815912
--- /dev/null
+++ b/workflows/minio/Kconfig.storage
@@ -0,0 +1,364 @@
+menu "MinIO Storage Configuration"
+
+# CLI override support for WARP_DEVICE
+config MINIO_DEVICE_SET_BY_CLI
+	bool
+	output yaml
+	default $(shell, scripts/check-cli-set-var.sh WARP_DEVICE)
+
+config MINIO_STORAGE_ENABLE
+	bool "Enable dedicated MinIO storage device"
+	default y
+	output yaml
+	help
+	  Configure a dedicated storage device for MinIO data storage.
+	  This allows testing MinIO performance on different filesystems
+	  and configurations by creating and mounting a dedicated partition.
+
+	  When enabled, MinIO data will be stored on a dedicated device
+	  and filesystem optimized for S3 workloads.
+
+if MINIO_STORAGE_ENABLE
+
+config MINIO_DEVICE
+	string "Device to use for MinIO storage"
+	output yaml
+	default $(shell, ./scripts/append-makefile-vars.sh $(WARP_DEVICE)) if MINIO_DEVICE_SET_BY_CLI
+	default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
+	default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
+	default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
+	default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_2XLARGE
+	default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
+	default "/dev/nvme1n1" if TERRAFORM_GCE
+	default "/dev/sdd" if TERRAFORM_AZURE
+	default TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME if TERRAFORM_OCI
+	help
+	  The device to use for MinIO storage. This device will be
+	  formatted and mounted to store MinIO S3 data.
+
+	  Can be overridden with WARP_DEVICE environment variable:
+	    make defconfig-minio-warp-xfs-16k WARP_DEVICE=/dev/nvme4n1
+
+config MINIO_MOUNT_POINT
+	string "Mount point for MinIO storage"
+	output yaml
+	default "/data/minio"
+	help
+	  The path where the MinIO storage filesystem will be mounted.
+	  MinIO will store all S3 data under this path.
+
+choice
+	prompt "MinIO storage filesystem"
+	default MINIO_FSTYPE_XFS
+
+config MINIO_FSTYPE_XFS
+	bool "XFS"
+	help
+	  Use XFS filesystem for MinIO storage. XFS provides excellent
+	  performance for large files and is recommended for production
+	  MinIO deployments. Supports various block sizes for testing
+	  large block size (LBS) configurations.
+
+config MINIO_FSTYPE_BTRFS
+	bool "Btrfs"
+	help
+	  Use Btrfs filesystem for MinIO storage. Btrfs provides
+	  advanced features like snapshots and compression, which can
+	  be beneficial for S3 storage management.
+
+config MINIO_FSTYPE_EXT4
+	bool "ext4"
+	help
+	  Use ext4 filesystem for MinIO storage. Ext4 is a mature
+	  and reliable filesystem with good all-around performance.
+
+config MINIO_FSTYPE_BCACHEFS
+	bool "bcachefs"
+	help
+	  Use bcachefs filesystem for MinIO storage. Bcachefs is a
+	  modern filesystem with advanced features like compression,
+	  encryption, and caching.
+
+endchoice
+
+config MINIO_FSTYPE
+	string
+	output yaml
+	default "xfs" if MINIO_FSTYPE_XFS
+	default "btrfs" if MINIO_FSTYPE_BTRFS
+	default "ext4" if MINIO_FSTYPE_EXT4
+	default "bcachefs" if MINIO_FSTYPE_BCACHEFS
+
+if MINIO_FSTYPE_XFS
+
+choice
+	prompt "XFS block size configuration"
+	default MINIO_XFS_BLOCKSIZE_4K
+
+config MINIO_XFS_BLOCKSIZE_4K
+	bool "4K block size (default)"
+	help
+	  Use 4K (4096 bytes) block size. This is the default and most
+	  compatible configuration.
+
+config MINIO_XFS_BLOCKSIZE_8K
+	bool "8K block size"
+	help
+	  Use 8K (8192 bytes) block size for improved performance with
+	  larger I/O operations.
+
+config MINIO_XFS_BLOCKSIZE_16K
+	bool "16K block size (LBS)"
+	help
+	  Use 16K (16384 bytes) block size. This is a large block size
+	  configuration that may require kernel LBS support.
+
+config MINIO_XFS_BLOCKSIZE_32K
+	bool "32K block size (LBS)"
+	help
+	  Use 32K (32768 bytes) block size. This is a large block size
+	  configuration that requires kernel LBS support.
+
+config MINIO_XFS_BLOCKSIZE_64K
+	bool "64K block size (LBS)"
+	help
+	  Use 64K (65536 bytes) block size. This is the maximum XFS block
+	  size and requires kernel LBS support.
+
+endchoice
+
+config MINIO_XFS_BLOCKSIZE
+	int
+	output yaml
+	default 4096 if MINIO_XFS_BLOCKSIZE_4K
+	default 8192 if MINIO_XFS_BLOCKSIZE_8K
+	default 16384 if MINIO_XFS_BLOCKSIZE_16K
+	default 32768 if MINIO_XFS_BLOCKSIZE_32K
+	default 65536 if MINIO_XFS_BLOCKSIZE_64K
+
+choice
+	prompt "XFS sector size"
+	default MINIO_XFS_SECTORSIZE_4K
+
+config MINIO_XFS_SECTORSIZE_4K
+	bool "4K sector size (default)"
+	help
+	  Use 4K (4096 bytes) sector size. This is the standard
+	  configuration for most modern drives.
+
+config MINIO_XFS_SECTORSIZE_512
+	bool "512 byte sector size"
+	depends on MINIO_XFS_BLOCKSIZE_4K
+	help
+	  Use the legacy 512-byte sector size. Only available with a 4K block size.
+
+config MINIO_XFS_SECTORSIZE_8K
+	bool "8K sector size"
+	depends on MINIO_XFS_BLOCKSIZE_8K || MINIO_XFS_BLOCKSIZE_16K || MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 8K (8192 bytes) sector size. Requires block size >= 8K.
+
+config MINIO_XFS_SECTORSIZE_16K
+	bool "16K sector size (LBS)"
+	depends on MINIO_XFS_BLOCKSIZE_16K || MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 16K (16384 bytes) sector size. Requires block size >= 16K
+	  and kernel LBS support.
+
+config MINIO_XFS_SECTORSIZE_32K
+	bool "32K sector size (LBS)"
+	depends on MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 32K (32768 bytes) sector size. Requires block size >= 32K
+	  and kernel LBS support.
+
+endchoice
+
+config MINIO_XFS_SECTORSIZE
+	int
+	output yaml
+	default 512 if MINIO_XFS_SECTORSIZE_512
+	default 4096 if MINIO_XFS_SECTORSIZE_4K
+	default 8192 if MINIO_XFS_SECTORSIZE_8K
+	default 16384 if MINIO_XFS_SECTORSIZE_16K
+	default 32768 if MINIO_XFS_SECTORSIZE_32K
+
+config MINIO_XFS_MKFS_OPTS
+	string "Additional XFS mkfs options for MinIO storage"
+	output yaml
+	default ""
+	help
+	  Additional options to pass to mkfs.xfs when creating the MinIO
+	  storage filesystem. Block and sector sizes are configured above.
+
+endif # MINIO_FSTYPE_XFS
+
+config MINIO_BTRFS_MKFS_OPTS
+	string "Btrfs mkfs options for MinIO storage"
+	output yaml
+	default "-f"
+	depends on MINIO_FSTYPE_BTRFS
+	help
+	  Options to pass to mkfs.btrfs when creating the MinIO storage
+	  filesystem.
+
+config MINIO_EXT4_MKFS_OPTS
+	string "ext4 mkfs options for MinIO storage"
+	output yaml
+	default "-F"
+	depends on MINIO_FSTYPE_EXT4
+	help
+	  Options to pass to mkfs.ext4 when creating the MinIO storage
+	  filesystem.
+
+config MINIO_BCACHEFS_MKFS_OPTS
+	string "bcachefs mkfs options for MinIO storage"
+	output yaml
+	default "-f"
+	depends on MINIO_FSTYPE_BCACHEFS
+	help
+	  Options to pass to mkfs.bcachefs when creating the MinIO storage
+	  filesystem.
+
+endif # MINIO_STORAGE_ENABLE
+
+# Multi-filesystem configuration when not skipping bringup
+if !KDEVOPS_USE_DECLARED_HOSTS && MINIO_STORAGE_ENABLE
+
+config MINIO_ENABLE_MULTIFS_TESTING
+	bool "Enable multi-filesystem testing"
+	default n
+	output yaml
+	help
+	  Enable testing the same MinIO workload across multiple filesystem
+	  configurations. This allows comparing S3 performance characteristics
+	  between different filesystems and their configurations.
+
+	  When enabled, multiple nodes will be created with different
+	  filesystem configurations for comprehensive performance analysis.
+
+if MINIO_ENABLE_MULTIFS_TESTING
+
+config MINIO_MULTIFS_TEST_XFS
+	bool "Test XFS configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on XFS filesystem with different
+	  block size configurations.
+
+if MINIO_MULTIFS_TEST_XFS
+
+menu "XFS configuration profiles"
+
+config MINIO_MULTIFS_XFS_4K_4KS
+	bool "XFS 4k block size - 4k sector size"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 4k filesystem block size
+	  and 4k sector size. This is the most common configuration
+	  and provides good performance for most S3 workloads.
+
+config MINIO_MULTIFS_XFS_16K_4KS
+	bool "XFS 16k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 16k filesystem block size
+	  and 4k sector size. Larger block sizes can improve performance
+	  for large object storage patterns.
+
+config MINIO_MULTIFS_XFS_32K_4KS
+	bool "XFS 32k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 32k filesystem block size
+	  and 4k sector size. Even larger block sizes can provide
+	  benefits for very large S3 objects.
+
+config MINIO_MULTIFS_XFS_64K_4KS
+	bool "XFS 64k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 64k filesystem block size
+	  and 4k sector size. Maximum supported block size for XFS,
+	  optimized for very large object operations.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_XFS
+
+config MINIO_MULTIFS_TEST_EXT4
+	bool "Test ext4 configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on ext4 filesystem with different
+	  configurations including bigalloc options.
+
+if MINIO_MULTIFS_TEST_EXT4
+
+menu "ext4 configuration profiles"
+
+config MINIO_MULTIFS_EXT4_4K
+	bool "ext4 4k block size"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on ext4 with standard 4k block size.
+	  This is the default ext4 configuration.
+
+config MINIO_MULTIFS_EXT4_16K_BIGALLOC
+	bool "ext4 16k bigalloc"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on ext4 with 16k bigalloc enabled.
+	  Bigalloc reduces metadata overhead and can improve
+	  performance for large S3 objects.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_EXT4
+
+config MINIO_MULTIFS_TEST_BTRFS
+	bool "Test btrfs configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on btrfs filesystem with
+	  common default configuration profile.
+
+if MINIO_MULTIFS_TEST_BTRFS
+
+menu "btrfs configuration profiles"
+
+config MINIO_MULTIFS_BTRFS_DEFAULT
+	bool "btrfs default profile"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on btrfs with default configuration.
+	  This includes modern defaults with free-space-tree and
+	  no-holes features enabled.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_BTRFS
+
+config MINIO_MULTIFS_RESULTS_DIR
+	string "Multi-filesystem results directory"
+	output yaml
+	default "/data/minio-multifs-benchmark"
+	help
+	  Directory where multi-filesystem test results and logs will be stored.
+	  Each filesystem configuration will have its own subdirectory.
+
+endif # MINIO_ENABLE_MULTIFS_TESTING
+
+endif # !KDEVOPS_USE_DECLARED_HOSTS && MINIO_STORAGE_ENABLE
+
+endmenu
diff --git a/workflows/minio/Kconfig.warp b/workflows/minio/Kconfig.warp
new file mode 100644
index 0000000..6a8fdb9
--- /dev/null
+++ b/workflows/minio/Kconfig.warp
@@ -0,0 +1,141 @@
+menu "MinIO Warp benchmark configuration"
+
+config MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+	bool "Run comprehensive benchmark suite"
+	default y
+	output yaml
+	help
+	  Run a complete suite of benchmarks including mixed, GET, PUT, DELETE,
+	  LIST operations with various object sizes and concurrency levels.
+	  This provides the most thorough performance analysis.
+
+if !MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+
+choice
+	prompt "Warp benchmark type"
+	default MINIO_WARP_BENCHMARK_MIXED
+	help
+	  Select the primary benchmark type for MinIO Warp testing.
+
+config MINIO_WARP_BENCHMARK_MIXED
+	bool "Mixed workload (GET/PUT/DELETE)"
+	help
+	  Run a mixed workload benchmark combining GET, PUT, and DELETE operations
+	  to simulate realistic S3 usage patterns.
+
+config MINIO_WARP_BENCHMARK_GET
+	bool "GET operations (download)"
+	help
+	  Focus on download performance testing with GET operations.
+
+config MINIO_WARP_BENCHMARK_PUT
+	bool "PUT operations (upload)"
+	help
+	  Focus on upload performance testing with PUT operations.
+
+config MINIO_WARP_BENCHMARK_DELETE
+	bool "DELETE operations"
+	help
+	  Test object deletion performance.
+
+config MINIO_WARP_BENCHMARK_LIST
+	bool "LIST operations"
+	help
+	  Test bucket and object listing performance.
+
+config MINIO_WARP_BENCHMARK_MULTIPART
+	bool "MULTIPART upload"
+	help
+	  Test large file upload performance using multipart uploads.
+
+endchoice
+
+endif # !MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+
+config MINIO_WARP_BENCHMARK_TYPE
+	string "Benchmark type to run"
+	output yaml
+	default "mixed"
+
+config MINIO_WARP_DURATION
+	string "Benchmark duration"
+	output yaml
+	default "5m"
+	help
+	  Duration for each benchmark run. Examples: 30s, 5m, 1h.
+	  Longer durations provide more stable results but take more time.
+
+config MINIO_WARP_CONCURRENT_REQUESTS
+	int "Concurrent requests"
+	output yaml
+	default 10
+	range 1 1000
+	help
+	  Number of concurrent requests to send to MinIO.
+	  Higher values increase load but may overwhelm the system.
+
+config MINIO_WARP_OBJECT_SIZE
+	string "Object size for testing"
+	output yaml
+	default "1MB"
+	help
+	  Size of objects to use in benchmarks. Examples: 1KB, 1MB, 10MB.
+	  Larger objects test throughput, smaller objects test IOPS.
+
+config MINIO_WARP_OBJECTS_PER_REQUEST
+	int "Objects per request"
+	output yaml
+	default 100
+	range 1 10000
+	help
+	  Number of objects to use in the benchmark.
+	  More objects provide better statistical accuracy.
+
+config MINIO_WARP_BUCKET_NAME
+	string "S3 bucket name for testing"
+	output yaml
+	default "warp-benchmark-bucket"
+	help
+	  Name of the S3 bucket to create and use for benchmarking.
+
+config MINIO_WARP_AUTO_TERMINATE
+	bool "Auto-terminate when results stabilize"
+	output yaml
+	default y
+	help
+	  Automatically terminate the benchmark when performance results
+	  have stabilized, potentially reducing test time.
+
+config MINIO_WARP_ENABLE_CLEANUP
+	bool "Clean up test data after benchmarks"
+	output yaml
+	default y
+	help
+	  Remove test objects and buckets after benchmarking completes.
+	  Disable if you want to inspect test data afterwards.
+
+config MINIO_WARP_OUTPUT_FORMAT
+	string "Output format"
+	output yaml
+	default "json"
+	help
+	  Output format for benchmark results. Options: json, csv, text.
+	  JSON format provides the most detailed metrics for analysis.
+
+config MINIO_WARP_ENABLE_WEB_UI
+	bool "Enable Warp Web UI for real-time monitoring"
+	output yaml
+	default n
+	help
+	  Enable the Warp web interface for real-time benchmark monitoring.
+	  Access the UI on the configured port (default 7762) during runs.
+
+config MINIO_WARP_WEB_UI_PORT
+	int "Web UI port"
+	output yaml
+	default 7762
+	depends on MINIO_WARP_ENABLE_WEB_UI
+	help
+	  Port for the Warp web interface.
+
+endmenu
diff --git a/workflows/minio/Makefile b/workflows/minio/Makefile
new file mode 100644
index 0000000..c543ed3
--- /dev/null
+++ b/workflows/minio/Makefile
@@ -0,0 +1,76 @@
+MINIO_DATA_TARGET			:= minio
+MINIO_DATA_TARGET_INSTALL		:= minio-install
+MINIO_DATA_TARGET_UNINSTALL		:= minio-uninstall
+MINIO_DATA_TARGET_DESTROY		:= minio-destroy
+MINIO_DATA_TARGET_RUN			:= minio-warp
+MINIO_DATA_TARGET_RESULTS		:= minio-results
+
+MINIO_PLAYBOOK		:= playbooks/minio.yml
+
+HELP_TARGETS += minio-help
+
+$(MINIO_DATA_TARGET): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)$(MAKE) $(MINIO_DATA_TARGET_INSTALL)
+
+$(MINIO_DATA_TARGET_INSTALL): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_install
+
+$(MINIO_DATA_TARGET_UNINSTALL): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_uninstall
+
+$(MINIO_DATA_TARGET_DESTROY): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_destroy
+
+$(MINIO_DATA_TARGET_RUN): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_warp
+
+$(MINIO_DATA_TARGET_RESULTS):
+	$(Q)if [ -d workflows/minio/results ]; then \
+		python3 workflows/minio/scripts/generate_warp_report.py workflows/minio/results/ && \
+		echo "" && \
+		echo "📊 MinIO Warp Analysis Complete!" && \
+		echo "Results available in workflows/minio/results/" && \
+		echo "  - warp_benchmark_report.html (open in browser)" && \
+		echo "  - PNG charts for performance visualization" && \
+		ls -lh workflows/minio/results/*.png 2>/dev/null | tail -5; \
+	else \
+		echo "No results directory found. Run 'make minio-warp' first."; \
+	fi
+
+minio-help:
+	@echo "MinIO Warp S3 benchmarking targets:"
+	@echo ""
+	@echo "minio                   - Install and setup MinIO server"
+	@echo "minio-install           - Install and setup MinIO server"
+	@echo "minio-uninstall         - Stop and remove MinIO containers"
+	@echo "minio-destroy           - Remove MinIO containers and clean up data"
+	@echo "minio-warp              - Run MinIO Warp benchmarks"
+	@echo "minio-results           - Collect and analyze benchmark results"
+	@echo ""
+	@echo "Example usage:"
+	@echo "  make defconfig-minio-warp    # Configure for Warp benchmarking"
+	@echo "  make bringup                 # Setup test nodes"
+	@echo "  make minio                   # Install MinIO server"
+	@echo "  make minio-warp              # Run benchmarks"
+	@echo "  make minio-results           # Generate analysis and visualizations"
+	@echo ""
+	@echo "Visualization options:"
+	@echo "  - Enable MINIO_WARP_ENABLE_WEB_UI in menuconfig for real-time monitoring"
+	@echo "  - Access web UI at http://node-ip:7762 during benchmarks"
+	@echo "  - View HTML report: workflows/minio/results/warp_benchmark_report.html"
+
+.PHONY: $(MINIO_DATA_TARGET) $(MINIO_DATA_TARGET_INSTALL) $(MINIO_DATA_TARGET_UNINSTALL)
+.PHONY: $(MINIO_DATA_TARGET_DESTROY) $(MINIO_DATA_TARGET_RUN) $(MINIO_DATA_TARGET_RESULTS)
+.PHONY: minio-help
diff --git a/workflows/minio/scripts/analyze_warp_results.py b/workflows/minio/scripts/analyze_warp_results.py
new file mode 100755
index 0000000..c20c57d
--- /dev/null
+++ b/workflows/minio/scripts/analyze_warp_results.py
@@ -0,0 +1,858 @@
+#!/usr/bin/env python3
+"""
+Analyze MinIO Warp benchmark results and generate reports with visualizations.
+"""
+
+import json
+import glob
+import os
+import sys
+from pathlib import Path
+from datetime import datetime
+import matplotlib.pyplot as plt
+import matplotlib.patches as mpatches
+import numpy as np
+from typing import Dict, List, Any
+
+
+def load_warp_results(results_dir: Path) -> List[Dict[str, Any]]:
+    """Load all Warp JSON result files from the results directory."""
+    results = []
+    json_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+    for json_file in sorted(json_files):
+        try:
+            with open(json_file, "r") as f:
+                content = f.read()
+                # Find where the JSON starts (after any terminal output)
+                json_start = content.find("{")
+                if json_start >= 0:
+                    json_content = content[json_start:]
+                    data = json.loads(json_content)
+                    data["_filename"] = json_file.name
+                    data["_filepath"] = str(json_file)
+                    results.append(data)
+                    print(f"Loaded: {json_file.name}")
+                else:
+                    print(f"No JSON found in {json_file}")
+        except Exception as e:
+            print(f"Error loading {json_file}: {e}")
+
+    return results
+
+
+def extract_metrics(result: Dict[str, Any]) -> Dict[str, Any]:
+    """Extract key metrics from a Warp result."""
+    metrics = {
+        "filename": result.get("_filename", "unknown"),
+        "timestamp": "",
+        "operation": "mixed",
+    }
+
+    # Check if we have the total stats
+    if "total" in result:
+        total = result["total"]
+
+        # Extract basic info
+        metrics["timestamp"] = total.get("start_time", "")
+        metrics["total_requests"] = total.get("total_requests", 0)
+        metrics["total_objects"] = total.get("total_objects", 0)
+        metrics["total_errors"] = total.get("total_errors", 0)
+        metrics["total_bytes"] = total.get("total_bytes", 0)
+        metrics["concurrency"] = total.get("concurrency", 0)
+
+        # Calculate duration in seconds
+        start_time = total.get("start_time", "")
+        end_time = total.get("end_time", "")
+        if start_time and end_time:
+            try:
+                from dateutil import parser
+
+                start = parser.parse(start_time)
+                end = parser.parse(end_time)
+                duration = (end - start).total_seconds()
+                metrics["duration_seconds"] = duration
+            except ImportError:
+                # dateutil unavailable; fall back to a rough fixed estimate
+                metrics["duration_seconds"] = 105  # Approximate observed run length
+
+        # Get throughput if directly available
+        if "throughput" in total and isinstance(total["throughput"], dict):
+            # Throughput is a complex structure with segmented data
+            tp = total["throughput"]
+            if "bytes" in tp:
+                bytes_total = tp["bytes"]
+                duration_ms = tp.get("measure_duration_millis", 1000)
+                duration_s = duration_ms / 1000
+                if duration_s > 0:
+                    metrics["throughput_avg_mbps"] = (
+                        bytes_total / (1024 * 1024)
+                    ) / duration_s
+            elif "segmented" in tp:
+                # Use median throughput
+                metrics["throughput_avg_mbps"] = tp["segmented"].get(
+                    "median_bps", 0
+                ) / (1024 * 1024)
+        elif metrics.get("duration_seconds", 0) > 0 and metrics["total_bytes"] > 0:
+            # Calculate throughput from bytes and duration
+            metrics["throughput_avg_mbps"] = (
+                metrics["total_bytes"] / (1024 * 1024)
+            ) / metrics["duration_seconds"]
+
+        # Calculate operations per second
+        if metrics.get("duration_seconds", 0) > 0:
+            metrics["ops_per_second"] = (
+                metrics["total_requests"] / metrics["duration_seconds"]
+            )
+
+    # Check for operations breakdown by type
+    if "by_op_type" in result:
+        ops = result["by_op_type"]
+
+        # Process each operation type
+        for op_type in ["GET", "PUT", "DELETE", "STAT"]:
+            if op_type in ops:
+                op_data = ops[op_type]
+                op_lower = op_type.lower()
+
+                # Extract operation count
+                if "ops" in op_data:
+                    metrics[f"{op_lower}_requests"] = op_data["ops"]
+
+                # Extract average duration
+                if "avg_duration" in op_data:
+                    metrics[f"{op_lower}_latency_avg_ms"] = (
+                        op_data["avg_duration"] / 1e6
+                    )
+
+                # Extract percentiles if available
+                if "percentiles_millis" in op_data:
+                    percentiles = op_data["percentiles_millis"]
+                    if "50" in percentiles:
+                        metrics[f"{op_lower}_latency_p50"] = percentiles["50"]
+                    if "90" in percentiles:
+                        metrics[f"{op_lower}_latency_p90"] = percentiles["90"]
+                    if "99" in percentiles:
+                        metrics[f"{op_lower}_latency_p99"] = percentiles["99"]
+
+                # Extract min/max if available
+                if "fastest_millis" in op_data:
+                    metrics[f"{op_lower}_latency_min"] = op_data["fastest_millis"]
+                if "slowest_millis" in op_data:
+                    metrics[f"{op_lower}_latency_max"] = op_data["slowest_millis"]
+
+        # Calculate aggregate latency metrics
+        latencies = []
+        for op in ["get", "put", "delete"]:
+            if f"{op}_latency_avg_ms" in metrics:
+                latencies.append(metrics[f"{op}_latency_avg_ms"])
+        if latencies:
+            metrics["latency_avg_ms"] = sum(latencies) / len(latencies)
+
+    # Extract from summary if present
+    if "summary" in result:
+        summary = result["summary"]
+        if "throughput_MiBs" in summary:
+            metrics["throughput_avg_mbps"] = summary["throughput_MiBs"]
+        if "ops_per_sec" in summary:
+            metrics["ops_per_second"] = summary["ops_per_sec"]
+
+    return metrics
+
+
+def generate_throughput_chart(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate throughput comparison chart."""
+    if not metrics_list:
+        return
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
+
+    # Throughput bar chart
+    labels = [
+        m["filename"].replace("warp_benchmark_", "").replace(".json", "")[:20]
+        for m in metrics_list
+    ]
+    x = np.arange(len(labels))
+
+    avg_throughput = [m.get("throughput_avg_mbps", 0) for m in metrics_list]
+
+    width = 0.35
+    ax1.bar(x, avg_throughput, width, label="Throughput", color="skyblue")
+
+    ax1.set_xlabel("Test Run")
+    ax1.set_ylabel("Throughput (MB/s)")
+    ax1.set_title("MinIO Warp Throughput Performance")
+    ax1.set_xticks(x)
+    ax1.set_xticklabels(labels, rotation=45, ha="right")
+    ax1.legend()
+    ax1.grid(True, alpha=0.3)
+
+    # Operations per second
+    ops_per_sec = [m.get("ops_per_second", 0) for m in metrics_list]
+    ax2.bar(x, ops_per_sec, color="orange")
+    ax2.set_xlabel("Test Run")
+    ax2.set_ylabel("Operations/Second")
+    ax2.set_title("Operations Per Second")
+    ax2.set_xticks(x)
+    ax2.set_xticklabels(labels, rotation=45, ha="right")
+    ax2.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    output_file = output_dir / "warp_throughput_performance.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_latency_chart(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate latency comparison chart."""
+    if not metrics_list:
+        return
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
+
+    labels = [
+        m["filename"].replace("warp_benchmark_", "").replace(".json", "")[:20]
+        for m in metrics_list
+    ]
+    x = np.arange(len(labels))
+
+    # Collect latency data by operation type
+    operations = ["get", "put", "delete"]
+    colors = {"get": "steelblue", "put": "orange", "delete": "red"}
+
+    # Operation-specific latencies
+    width = 0.2
+    offset = -width
+    for op in operations:
+        op_latencies = []
+        for m in metrics_list:
+            # Try operation-specific latency first, then fall back to general
+            lat = m.get(f"{op}_latency_avg_ms", m.get("latency_avg_ms", 0))
+            op_latencies.append(lat)
+
+        if any(l > 0 for l in op_latencies):
+            ax1.bar(x + offset, op_latencies, width, label=op.upper(), color=colors[op])
+            offset += width
+
+    ax1.set_xlabel("Test Run")
+    ax1.set_ylabel("Latency (ms)")
+    ax1.set_title("Request Latency Distribution")
+    ax1.set_xticks(x)
+    ax1.set_xticklabels(labels, rotation=45, ha="right")
+    ax1.legend()
+    ax1.grid(True, alpha=0.3)
+
+    # Min/Max latency range
+    lat_min = [m.get("latency_min_ms", 0) for m in metrics_list]
+    lat_max = [m.get("latency_max_ms", 0) for m in metrics_list]
+
+    ax2.bar(x - width / 2, lat_min, width, label="Min", color="green")
+    ax2.bar(x + width / 2, lat_max, width, label="Max", color="red")
+
+    ax2.set_xlabel("Test Run")
+    ax2.set_ylabel("Latency (ms)")
+    ax2.set_title("Latency Range (Min/Max)")
+    ax2.set_xticks(x)
+    ax2.set_xticklabels(labels, rotation=45, ha="right")
+    ax2.legend()
+    ax2.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    output_file = output_dir / "warp_latency_analysis.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_performance_summary_chart(
+    metrics_list: List[Dict[str, Any]], output_dir: Path
+):
+    """Generate a comprehensive performance summary chart."""
+    if not metrics_list:
+        return
+
+    fig = plt.figure(figsize=(16, 10))
+    gs = fig.add_gridspec(3, 2, hspace=0.3, wspace=0.25)
+
+    # Throughput over time
+    ax1 = fig.add_subplot(gs[0, :])
+    timestamps = []
+    throughputs = []
+    for m in metrics_list:
+        try:
+            if m.get("timestamp"):
+                timestamps.append(
+                    datetime.fromisoformat(m["timestamp"].replace("Z", "+00:00"))
+                )
+                throughputs.append(m.get("throughput_avg_mbps", 0))
+        except (ValueError, TypeError):
+            pass
+
+    if timestamps:
+        ax1.plot(timestamps, throughputs, "o-", linewidth=2, markersize=8, color="blue")
+        ax1.set_xlabel("Time")
+        ax1.set_ylabel("Throughput (MB/s)")
+        ax1.set_title("Throughput Over Time", fontsize=14, fontweight="bold")
+        ax1.grid(True, alpha=0.3)
+        ax1.tick_params(axis="x", rotation=45)
+
+    # Operations distribution
+    ax2 = fig.add_subplot(gs[1, 0])
+    ops_data = [m.get("ops_per_second", 0) for m in metrics_list]
+    if ops_data:
+        ax2.hist(ops_data, bins=10, color="orange", edgecolor="black", alpha=0.7)
+        ax2.set_xlabel("Operations/Second")
+        ax2.set_ylabel("Frequency")
+        ax2.set_title("Operations Distribution", fontsize=12, fontweight="bold")
+        ax2.grid(True, alpha=0.3)
+
+    # Latency box plot
+    ax3 = fig.add_subplot(gs[1, 1])
+    latency_data = []
+    for m in metrics_list:
+        lat_data = []
+        if m.get("latency_avg_ms"):
+            lat_data.extend(
+                [
+                    m.get("latency_min_ms", 0),
+                    m.get("latency_percentile_50", 0),
+                    m.get("latency_avg_ms", 0),
+                    m.get("latency_percentile_99", 0),
+                    m.get("latency_max_ms", 0),
+                ]
+            )
+        if lat_data:
+            latency_data.append(lat_data)
+
+    if latency_data:
+        ax3.boxplot(latency_data)
+        ax3.set_xlabel("Test Run")
+        ax3.set_ylabel("Latency (ms)")
+        ax3.set_title("Latency Distribution", fontsize=12, fontweight="bold")
+        ax3.grid(True, alpha=0.3)
+
+    # Performance metrics table
+    ax4 = fig.add_subplot(gs[2, :])
+    ax4.axis("tight")
+    ax4.axis("off")
+
+    # Create summary statistics
+    if metrics_list:
+        avg_metrics = metrics_list[-1]  # summarize the most recent run
+        table_data = [
+            ["Metric", "Value"],
+            [
+                "Average Throughput",
+                f"{avg_metrics.get('throughput_avg_mbps', 0):.2f} MB/s",
+            ],
+            ["Operations/Second", f"{avg_metrics.get('ops_per_second', 0):.0f}"],
+            ["Average Latency", f"{avg_metrics.get('latency_avg_ms', 0):.2f} ms"],
+            ["P99 Latency", f"{avg_metrics.get('latency_percentile_99', 0):.2f} ms"],
+            ["Total Operations", f"{avg_metrics.get('ops_total', 0):.0f}"],
+            ["Object Size", str(avg_metrics.get("object_size", "unknown"))],
+            ["Error Rate", f"{avg_metrics.get('error_rate', 0):.2%}"],
+        ]
+
+        table = ax4.table(
+            cellText=table_data, cellLoc="left", loc="center", colWidths=[0.3, 0.3]
+        )
+        table.auto_set_font_size(False)
+        table.set_fontsize(10)
+        table.scale(1, 1.5)
+
+        # Style the header row
+        for i in range(2):
+            table[(0, i)].set_facecolor("#40466e")
+            table[(0, i)].set_text_props(weight="bold", color="white")
+
+    plt.suptitle("MinIO Warp Performance Summary", fontsize=16, fontweight="bold")
+
+    output_file = output_dir / "warp_performance_summary.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_text_report(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate a detailed text report."""
+    output_file = output_dir / "warp_analysis_report.txt"
+
+    with open(output_file, "w") as f:
+        f.write("=" * 80 + "\n")
+        f.write("MinIO Warp Benchmark Analysis Report\n")
+        f.write("=" * 80 + "\n\n")
+        f.write(f"Generated: {datetime.now().isoformat()}\n")
+        f.write(f"Total test runs analyzed: {len(metrics_list)}\n\n")
+
+        if not metrics_list:
+            f.write("No benchmark results found.\n")
+            return
+
+        # Overall statistics
+        f.write("OVERALL PERFORMANCE STATISTICS\n")
+        f.write("-" * 40 + "\n")
+
+        throughputs = [
+            m.get("throughput_avg_mbps", 0)
+            for m in metrics_list
+            if m.get("throughput_avg_mbps")
+        ]
+        if throughputs:
+            f.write("Throughput:\n")
+            f.write(f"  Average: {np.mean(throughputs):.2f} MB/s\n")
+            f.write(f"  Median:  {np.median(throughputs):.2f} MB/s\n")
+            f.write(f"  Min:     {np.min(throughputs):.2f} MB/s\n")
+            f.write(f"  Max:     {np.max(throughputs):.2f} MB/s\n")
+            f.write(f"  StdDev:  {np.std(throughputs):.2f} MB/s\n\n")
+
+        ops_rates = [
+            m.get("ops_per_second", 0) for m in metrics_list if m.get("ops_per_second")
+        ]
+        if ops_rates:
+            f.write("Operations per Second:\n")
+            f.write(f"  Average: {np.mean(ops_rates):.0f} ops/s\n")
+            f.write(f"  Median:  {np.median(ops_rates):.0f} ops/s\n")
+            f.write(f"  Min:     {np.min(ops_rates):.0f} ops/s\n")
+            f.write(f"  Max:     {np.max(ops_rates):.0f} ops/s\n\n")
+
+        latencies = [
+            m.get("latency_avg_ms", 0) for m in metrics_list if m.get("latency_avg_ms")
+        ]
+        if latencies:
+            f.write("Average Latency:\n")
+            f.write(f"  Mean:    {np.mean(latencies):.2f} ms\n")
+            f.write(f"  Median:  {np.median(latencies):.2f} ms\n")
+            f.write(f"  Min:     {np.min(latencies):.2f} ms\n")
+            f.write(f"  Max:     {np.max(latencies):.2f} ms\n\n")
+
+        # Individual test run details
+        f.write("=" * 80 + "\n")
+        f.write("INDIVIDUAL TEST RUN DETAILS\n")
+        f.write("=" * 80 + "\n\n")
+
+        for i, metrics in enumerate(metrics_list, 1):
+            f.write(f"Test Run #{i}\n")
+            f.write("-" * 40 + "\n")
+            f.write(f"File: {metrics.get('filename', 'unknown')}\n")
+            f.write(f"Timestamp: {metrics.get('timestamp', 'N/A')}\n")
+            f.write(f"Operation: {metrics.get('operation', 'unknown')}\n")
+            f.write(f"Duration: {metrics.get('duration_seconds', 0):.1f} seconds\n")
+            f.write(f"Object Size: {metrics.get('object_size', 'unknown')}\n")
+            f.write(f"Total Objects: {metrics.get('objects_total', 0)}\n")
+
+            if metrics.get("throughput_avg_mbps"):
+                f.write("\nThroughput Performance:\n")
+                f.write(
+                    f"  Average: {metrics.get('throughput_avg_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  Min:     {metrics.get('throughput_min_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  Max:     {metrics.get('throughput_max_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  P50:     {metrics.get('throughput_percentile_50', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  P99:     {metrics.get('throughput_percentile_99', 0):.2f} MB/s\n"
+                )
+
+            if metrics.get("ops_per_second"):
+                f.write("\nOperations Performance:\n")
+                f.write(f"  Total Operations: {metrics.get('ops_total', 0):.0f}\n")
+                f.write(
+                    f"  Operations/Second: {metrics.get('ops_per_second', 0):.0f}\n"
+                )
+                f.write(
+                    f"  Avg Duration: {metrics.get('ops_avg_duration_ms', 0):.2f} ms\n"
+                )
+
+            if metrics.get("latency_avg_ms"):
+                f.write("\nLatency Metrics:\n")
+                f.write(f"  Average: {metrics.get('latency_avg_ms', 0):.2f} ms\n")
+                f.write(f"  Min:     {metrics.get('latency_min_ms', 0):.2f} ms\n")
+                f.write(f"  Max:     {metrics.get('latency_max_ms', 0):.2f} ms\n")
+                f.write(
+                    f"  P50:     {metrics.get('latency_percentile_50', 0):.2f} ms\n"
+                )
+                f.write(
+                    f"  P99:     {metrics.get('latency_percentile_99', 0):.2f} ms\n"
+                )
+
+            if metrics.get("error_count", 0) > 0:
+                f.write("\nErrors:\n")
+                f.write(f"  Error Count: {metrics.get('error_count', 0)}\n")
+                f.write(f"  Error Rate: {metrics.get('error_rate', 0):.2%}\n")
+
+            f.write("\n")
+
+        f.write("=" * 80 + "\n")
+        f.write("END OF REPORT\n")
+        f.write("=" * 80 + "\n")
+
+    print(f"Generated: {output_file}")
+
+
+def generate_html_report(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate a comprehensive HTML report with embedded visualizations."""
+    output_file = output_dir / "warp_benchmark_report.html"
+
+    # Check if PNG files exist
+    throughput_png = output_dir / "warp_throughput_performance.png"
+    latency_png = output_dir / "warp_latency_analysis.png"
+    summary_png = output_dir / "warp_performance_summary.png"
+
+    with open(output_file, "w") as f:
+        f.write(
+            """<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="UTF-8">
+    <title>MinIO Warp Benchmark Report</title>
+    <style>
+        body {
+            font-family: 'Segoe UI', Arial, sans-serif;
+            max-width: 1400px;
+            margin: 0 auto;
+            padding: 20px;
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            min-height: 100vh;
+        }
+        .container {
+            background: white;
+            border-radius: 15px;
+            box-shadow: 0 20px 60px rgba(0,0,0,0.3);
+            padding: 40px;
+        }
+        h1 {
+            color: #2c3e50;
+            text-align: center;
+            font-size: 2.5em;
+            margin-bottom: 10px;
+            text-shadow: 2px 2px 4px rgba(0,0,0,0.1);
+        }
+        .subtitle {
+            text-align: center;
+            color: #7f8c8d;
+            margin-bottom: 30px;
+        }
+        .summary-grid {
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin: 30px 0;
+        }
+        .metric-card {
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            color: white;
+            padding: 20px;
+            border-radius: 10px;
+            text-align: center;
+            box-shadow: 0 5px 15px rgba(0,0,0,0.2);
+        }
+        .metric-value {
+            font-size: 2em;
+            font-weight: bold;
+            margin: 10px 0;
+        }
+        .metric-label {
+            font-size: 0.9em;
+            opacity: 0.9;
+        }
+        .section {
+            margin: 40px 0;
+        }
+        .section h2 {
+            color: #34495e;
+            border-bottom: 2px solid #667eea;
+            padding-bottom: 10px;
+            margin-bottom: 20px;
+        }
+        table {
+            width: 100%;
+            border-collapse: collapse;
+            margin: 20px 0;
+        }
+        th {
+            background: #667eea;
+            color: white;
+            padding: 12px;
+            text-align: left;
+        }
+        td {
+            padding: 10px;
+            border-bottom: 1px solid #ecf0f1;
+        }
+        tr:hover {
+            background: #f8f9fa;
+        }
+        .chart-container {
+            text-align: center;
+            margin: 30px 0;
+        }
+        .chart-container img {
+            max-width: 100%;
+            height: auto;
+            border-radius: 10px;
+            box-shadow: 0 5px 15px rgba(0,0,0,0.1);
+        }
+        .performance-good {
+            color: #27ae60;
+            font-weight: bold;
+        }
+        .performance-warning {
+            color: #f39c12;
+            font-weight: bold;
+        }
+        .performance-bad {
+            color: #e74c3c;
+            font-weight: bold;
+        }
+        .footer {
+            text-align: center;
+            margin-top: 40px;
+            padding-top: 20px;
+            border-top: 1px solid #ecf0f1;
+            color: #7f8c8d;
+        }
+    </style>
+</head>
+<body>
+    <div class="container">
+        <h1>🚀 MinIO Warp Benchmark Report</h1>
+        <div class="subtitle">Generated on """
+            + datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+            + """</div>
+
+        <div class="section">
+            <h2>📊 Performance Summary</h2>
+            <div class="summary-grid">
+"""
+        )
+
+        if metrics_list:
+            # Calculate summary statistics
+            throughputs = [
+                m.get("throughput_avg_mbps", 0)
+                for m in metrics_list
+                if m.get("throughput_avg_mbps")
+            ]
+            ops_rates = [
+                m.get("ops_per_second", 0)
+                for m in metrics_list
+                if m.get("ops_per_second")
+            ]
+            latencies = [
+                m.get("latency_avg_ms", 0)
+                for m in metrics_list
+                if m.get("latency_avg_ms")
+            ]
+
+            if throughputs:
+                avg_throughput = np.mean(throughputs)
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Average Throughput</div>
+                    <div class="metric-value">{avg_throughput:.1f}</div>
+                    <div class="metric-label">MB/s</div>
+                </div>
+                """
+                )
+
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Peak Throughput</div>
+                    <div class="metric-value">{np.max(throughputs):.1f}</div>
+                    <div class="metric-label">MB/s</div>
+                </div>
+                """
+                )
+
+            if ops_rates:
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Avg Operations</div>
+                    <div class="metric-value">{np.mean(ops_rates):.0f}</div>
+                    <div class="metric-label">ops/second</div>
+                </div>
+                """
+                )
+
+            if latencies:
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Avg Latency</div>
+                    <div class="metric-value">{np.mean(latencies):.1f}</div>
+                    <div class="metric-label">ms</div>
+                </div>
+                """
+                )
+
+            f.write(
+                f"""
+                <div class="metric-card">
+                    <div class="metric-label">Test Runs</div>
+                    <div class="metric-value">{len(metrics_list)}</div>
+                    <div class="metric-label">completed</div>
+                </div>
+            """
+            )
+
+        f.write(
+            """
+            </div>
+        </div>
+
+        <div class="section">
+            <h2>📈 Performance Visualizations</h2>
+        """
+        )
+
+        if throughput_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{throughput_png.name}" alt="Throughput Performance">
+            </div>
+            """
+            )
+
+        if latency_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{latency_png.name}" alt="Latency Analysis">
+            </div>
+            """
+            )
+
+        if summary_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{summary_png.name}" alt="Performance Summary">
+            </div>
+            """
+            )
+
+        f.write(
+            """
+        <div class="section">
+            <h2>📋 Detailed Results</h2>
+            <table>
+                <thead>
+                    <tr>
+                        <th>Test Run</th>
+                        <th>Operation</th>
+                        <th>Throughput (MB/s)</th>
+                        <th>Ops/Second</th>
+                        <th>Avg Latency (ms)</th>
+                        <th>P99 Latency (ms)</th>
+                        <th>Errors</th>
+                    </tr>
+                </thead>
+                <tbody>
+        """
+        )
+
+        for metrics in metrics_list:
+            throughput = metrics.get("throughput_avg_mbps", 0)
+            ops_sec = metrics.get("ops_per_second", 0)
+            latency = metrics.get("latency_avg_ms", 0)
+            p99_lat = metrics.get("latency_percentile_99", 0)
+            errors = metrics.get("error_count", 0)
+
+            # Color code based on performance
+            throughput_class = (
+                "performance-good"
+                if throughput > 100
+                else "performance-warning" if throughput > 50 else "performance-bad"
+            )
+            latency_class = (
+                "performance-good"
+                if latency < 10
+                else "performance-warning" if latency < 50 else "performance-bad"
+            )
+
+            f.write(
+                f"""
+                <tr>
+                    <td>{metrics.get('filename', 'unknown').replace('warp_benchmark_', '').replace('.json', '')}</td>
+                    <td>{metrics.get('operation', 'mixed')}</td>
+                    <td class="{throughput_class}">{throughput:.2f}</td>
+                    <td>{ops_sec:.0f}</td>
+                    <td class="{latency_class}">{latency:.2f}</td>
+                    <td>{p99_lat:.2f}</td>
+                    <td>{errors}</td>
+                </tr>
+            """
+            )
+
+        f.write(
+            """
+                </tbody>
+            </table>
+        </div>
+
+        <div class="footer">
+            <p>MinIO Warp Benchmark Analysis | Generated by kdevops</p>
+        </div>
+    </div>
+</body>
+</html>
+        """
+        )
+
+    print(f"Generated: {output_file}")
+
+
+def main():
+    """Main analysis function."""
+    # Determine results directory
+    script_dir = Path(__file__).parent
+    results_dir = script_dir.parent / "results"
+
+    if not results_dir.exists():
+        print(f"Results directory not found: {results_dir}")
+        return 1
+
+    # Load all results
+    results = load_warp_results(results_dir)
+    if not results:
+        print("No Warp benchmark results found.")
+        return 1
+
+    print(f"\nFound {len(results)} benchmark result(s)")
+
+    # Extract metrics from each result
+    metrics_list = [extract_metrics(result) for result in results]
+
+    # Generate visualizations
+    print("\nGenerating visualizations...")
+    generate_throughput_chart(metrics_list, results_dir)
+    generate_latency_chart(metrics_list, results_dir)
+    generate_performance_summary_chart(metrics_list, results_dir)
+
+    # Generate reports
+    print("\nGenerating reports...")
+    generate_text_report(metrics_list, results_dir)
+    generate_html_report(metrics_list, results_dir)
+
+    print("\n✅ Analysis complete! Check the results directory for:")
+    print("  - warp_throughput_performance.png")
+    print("  - warp_latency_analysis.png")
+    print("  - warp_performance_summary.png")
+    print("  - warp_analysis_report.txt")
+    print("  - warp_benchmark_report.html")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/workflows/minio/scripts/generate_warp_report.py b/workflows/minio/scripts/generate_warp_report.py
new file mode 100755
index 0000000..2eff522
--- /dev/null
+++ b/workflows/minio/scripts/generate_warp_report.py
@@ -0,0 +1,404 @@
+#!/usr/bin/env python3
+"""
+Generate graphs and HTML report from MinIO Warp benchmark results
+"""
+
+import json
+import os
+import sys
+import glob
+from datetime import datetime
+import matplotlib.pyplot as plt
+import matplotlib.dates as mdates
+from pathlib import Path
+
+
+def parse_warp_json(json_file):
+    """Parse Warp benchmark JSON output"""
+    with open(json_file, "r") as f:
+        content = f.read()
+        # Find the JSON object in the output (skip any non-JSON prefix)
+        json_start = content.find("{")
+        if json_start == -1:
+            raise ValueError(f"No JSON found in {json_file}")
+        json_content = content[json_start:]
+        return json.loads(json_content)
+
+
+def generate_throughput_graph(data, output_dir, filename_prefix):
+    """Generate throughput over time graph"""
+    segments = data["total"]["throughput"]["segmented"]["segments"]
+
+    # Extract timestamps and throughput values
+    times = []
+    throughput_mbps = []
+    ops_per_sec = []
+
+    for segment in segments:
+        time_str = segment["start"]
+        # fromisoformat handles numeric UTC offsets natively; only "Z" needs rewriting
+        timestamp = datetime.fromisoformat(time_str.replace("Z", "+00:00"))
+        times.append(timestamp)
+        throughput_mbps.append(
+            segment["bytes_per_sec"] / (1024 * 1024)
+        )  # Convert to MB/s
+        ops_per_sec.append(segment["obj_per_sec"])
+
+    # Create figure with two subplots
+    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))
+
+    # Throughput graph
+    ax1.plot(times, throughput_mbps, "b-", linewidth=2, marker="o")
+    ax1.set_ylabel("Throughput (MB/s)", fontsize=12)
+    ax1.set_title("MinIO Warp Benchmark - Throughput Over Time", fontsize=14)
+    ax1.grid(True, alpha=0.3)
+    ax1.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
+
+    # Add average line (max() guards against an empty segment list)
+    avg_throughput = sum(throughput_mbps) / max(len(throughput_mbps), 1)
+    ax1.axhline(
+        y=avg_throughput,
+        color="r",
+        linestyle="--",
+        alpha=0.7,
+        label=f"Average: {avg_throughput:.1f} MB/s",
+    )
+    ax1.legend()
+
+    # Operations per second graph
+    ax2.plot(times, ops_per_sec, "g-", linewidth=2, marker="s")
+    ax2.set_xlabel("Time", fontsize=12)
+    ax2.set_ylabel("Operations/sec", fontsize=12)
+    ax2.set_title("Operations Per Second", fontsize=14)
+    ax2.grid(True, alpha=0.3)
+    ax2.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
+
+    # Add average line
+    avg_ops = sum(ops_per_sec) / max(len(ops_per_sec), 1)
+    ax2.axhline(
+        y=avg_ops,
+        color="r",
+        linestyle="--",
+        alpha=0.7,
+        label=f"Average: {avg_ops:.1f} ops/s",
+    )
+    ax2.legend()
+
+    plt.gcf().autofmt_xdate()
+    plt.tight_layout()
+
+    graph_file = os.path.join(output_dir, f"{filename_prefix}_throughput.png")
+    plt.savefig(graph_file, dpi=100, bbox_inches="tight")
+    plt.close()
+
+    return graph_file
+
+
+def generate_operation_stats_graph(data, output_dir, filename_prefix):
+    """Generate operation statistics bar chart"""
+    operations = data.get("operations", {})
+
+    if not operations:
+        return None
+
+    op_types = []
+    throughputs = []
+    latencies = []
+
+    for op_type, op_data in operations.items():
+        if op_type in ["DELETE", "GET", "PUT", "STAT"]:
+            op_types.append(op_type)
+            # Default missing fields to 0 so bar positions stay aligned
+            # with op_types even when an operation lacks a metric
+            throughputs.append(op_data.get("throughput", {}).get("obj_per_sec", 0))
+            # Convert latency from nanoseconds to milliseconds
+            latencies.append(op_data.get("latency", {}).get("mean", 0) / 1_000_000)
+
+    if not op_types:
+        return None
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
+
+    # Throughput bar chart
+    if throughputs:
+        ax1.bar(op_types, throughputs, color=["blue", "green", "red", "orange"])
+        ax1.set_ylabel("Operations/sec", fontsize=12)
+        ax1.set_title("Operation Throughput", fontsize=14)
+        ax1.grid(True, axis="y", alpha=0.3)
+
+    # Latency bar chart
+    if latencies:
+        ax2.bar(op_types, latencies, color=["blue", "green", "red", "orange"])
+        ax2.set_ylabel("Latency (ms)", fontsize=12)
+        ax2.set_title("Operation Latency (Mean)", fontsize=14)
+        ax2.grid(True, axis="y", alpha=0.3)
+
+    plt.tight_layout()
+
+    graph_file = os.path.join(output_dir, f"{filename_prefix}_operations.png")
+    plt.savefig(graph_file, dpi=100, bbox_inches="tight")
+    plt.close()
+
+    return graph_file
+
+
+def generate_html_report(json_files, output_dir):
+    """Generate HTML report with all results"""
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+    html_content = f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>MinIO Warp Benchmark Results</title>
+    <style>
+        body {{
+            font-family: Arial, sans-serif;
+            margin: 20px;
+            background-color: #f5f5f5;
+        }}
+        h1 {{
+            color: #333;
+            border-bottom: 3px solid #4CAF50;
+            padding-bottom: 10px;
+        }}
+        h2 {{
+            color: #666;
+            margin-top: 30px;
+        }}
+        .result-section {{
+            background-color: white;
+            padding: 20px;
+            margin-bottom: 30px;
+            border-radius: 8px;
+            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
+        }}
+        .stats-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin: 20px 0;
+        }}
+        .stat-card {{
+            background-color: #f9f9f9;
+            padding: 15px;
+            border-radius: 5px;
+            border-left: 4px solid #4CAF50;
+        }}
+        .stat-label {{
+            color: #666;
+            font-size: 0.9em;
+        }}
+        .stat-value {{
+            font-size: 1.5em;
+            font-weight: bold;
+            color: #333;
+            margin-top: 5px;
+        }}
+        img {{
+            max-width: 100%;
+            height: auto;
+            margin: 20px 0;
+            border: 1px solid #ddd;
+            border-radius: 5px;
+        }}
+        table {{
+            width: 100%;
+            border-collapse: collapse;
+            margin: 20px 0;
+        }}
+        th, td {{
+            padding: 10px;
+            text-align: left;
+            border-bottom: 1px solid #ddd;
+        }}
+        th {{
+            background-color: #4CAF50;
+            color: white;
+        }}
+        tr:hover {{
+            background-color: #f5f5f5;
+        }}
+        .timestamp {{
+            color: #666;
+            font-style: italic;
+        }}
+    </style>
+</head>
+<body>
+    <h1>MinIO Warp Benchmark Results</h1>
+    <p class="timestamp">Generated: {timestamp}</p>
+"""
+
+    for json_file in sorted(json_files, reverse=True):
+        try:
+            data = parse_warp_json(json_file)
+            filename = os.path.basename(json_file)
+            filename_prefix = filename.replace(".json", "")
+
+            # Extract key metrics
+            total = data["total"]
+            total_requests = total.get("total_requests", 0)
+            total_objects = total.get("total_objects", 0)
+            total_errors = total.get("total_errors", 0)
+            total_bytes = total.get("total_bytes", 0)
+            concurrency = total.get("concurrency", 0)
+
+            throughput_data = total.get("throughput", {}).get("segmented", {})
+            fastest_bps = throughput_data.get("fastest_bps", 0) / (1024 * 1024)  # MB/s
+            slowest_bps = throughput_data.get("slowest_bps", 0) / (1024 * 1024)  # MB/s
+            average_bps = throughput_data.get("average_bps", 0) / (1024 * 1024)  # MB/s
+
+            html_content += f"""
+    <div class="result-section">
+        <h2>{filename}</h2>
+
+        <div class="stats-grid">
+            <div class="stat-card">
+                <div class="stat-label">Total Requests</div>
+                <div class="stat-value">{total_requests:,}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Objects</div>
+                <div class="stat-value">{total_objects:,}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Errors</div>
+                <div class="stat-value">{total_errors}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Data</div>
+                <div class="stat-value">{total_bytes / (1024**3):.2f} GB</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Concurrency</div>
+                <div class="stat-value">{concurrency}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Average Throughput</div>
+                <div class="stat-value">{average_bps:.1f} MB/s</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Fastest Throughput</div>
+                <div class="stat-value">{fastest_bps:.1f} MB/s</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Slowest Throughput</div>
+                <div class="stat-value">{slowest_bps:.1f} MB/s</div>
+            </div>
+        </div>
+"""
+
+            # Generate graphs
+            throughput_graph = generate_throughput_graph(
+                data, output_dir, filename_prefix
+            )
+            if throughput_graph:
+                rel_path = os.path.basename(throughput_graph)
+                html_content += (
+                    f'        <img src="{rel_path}" alt="Throughput Graph">\n'
+                )
+
+            ops_graph = generate_operation_stats_graph(
+                data, output_dir, filename_prefix
+            )
+            if ops_graph:
+                rel_path = os.path.basename(ops_graph)
+                html_content += (
+                    f'        <img src="{rel_path}" alt="Operations Statistics">\n'
+                )
+
+            # Add operations table if available
+            operations = data.get("operations", {})
+            if operations:
+                html_content += """
+        <h3>Operation Details</h3>
+        <table>
+            <tr>
+                <th>Operation</th>
+                <th>Throughput (ops/s)</th>
+                <th>Mean Latency (ms)</th>
+                <th>Min Latency (ms)</th>
+                <th>Max Latency (ms)</th>
+            </tr>
+"""
+                for op_type, op_data in operations.items():
+                    if op_type in ["DELETE", "GET", "PUT", "STAT"]:
+                        throughput = op_data.get("throughput", {}).get("obj_per_sec", 0)
+                        latency = op_data.get("latency", {})
+                        mean_lat = latency.get("mean", 0) / 1_000_000
+                        min_lat = latency.get("min", 0) / 1_000_000
+                        max_lat = latency.get("max", 0) / 1_000_000
+
+                        html_content += f"""
+            <tr>
+                <td>{op_type}</td>
+                <td>{throughput:.2f}</td>
+                <td>{mean_lat:.2f}</td>
+                <td>{min_lat:.2f}</td>
+                <td>{max_lat:.2f}</td>
+            </tr>
+"""
+                html_content += "        </table>\n"
+
+            html_content += "    </div>\n"
+
+        except Exception as e:
+            print(f"Error processing {json_file}: {e}")
+            continue
+
+    html_content += """
+</body>
+</html>
+"""
+
+    html_file = os.path.join(output_dir, "warp_benchmark_report.html")
+    with open(html_file, "w") as f:
+        f.write(html_content)
+
+    return html_file
+
+
+def main():
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        # Default to workflows/minio/results
+        script_dir = Path(__file__).parent.absolute()
+        results_dir = script_dir.parent / "results"
+
+    results_dir = Path(results_dir)
+    if not results_dir.exists():
+        print(f"Results directory {results_dir} does not exist")
+        sys.exit(1)
+
+    # Find all JSON files
+    json_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+    if not json_files:
+        print(f"No warp_benchmark_*.json files found in {results_dir}")
+        sys.exit(1)
+
+    print(f"Found {len(json_files)} result files")
+
+    # Generate HTML report with graphs
+    html_file = generate_html_report(json_files, results_dir)
+    print(f"Generated HTML report: {html_file}")
+
+    # Also generate individual graphs for latest result
+    latest_json = max(json_files, key=os.path.getctime)
+    data = parse_warp_json(latest_json)
+    filename_prefix = latest_json.stem
+
+    throughput_graph = generate_throughput_graph(data, results_dir, filename_prefix)
+    if throughput_graph:
+        print(f"Generated throughput graph: {throughput_graph}")
+
+    ops_graph = generate_operation_stats_graph(data, results_dir, filename_prefix)
+    if ops_graph:
+        print(f"Generated operations graph: {ops_graph}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/workflows/minio/scripts/run_benchmark_suite.sh b/workflows/minio/scripts/run_benchmark_suite.sh
new file mode 100755
index 0000000..ca0531b
--- /dev/null
+++ b/workflows/minio/scripts/run_benchmark_suite.sh
@@ -0,0 +1,116 @@
+#!/bin/bash
+# Run comprehensive MinIO Warp benchmark suite
+
+MINIO_HOST="${1:-localhost:9000}"
+ACCESS_KEY="${2:-minioadmin}"
+SECRET_KEY="${3:-minioadmin}"
+TOTAL_DURATION="${4:-30m}"
+RESULTS_DIR="/tmp/warp-results"
+TIMESTAMP=$(date +%s)
+
+# Parse duration to seconds for calculation
+parse_duration_to_seconds() {
+    local duration="$1"
+    local value="${duration//[^0-9]/}"
+    local unit="${duration//[0-9]/}"
+
+    case "$unit" in
+        s) echo "$value" ;;
+        m) echo $((value * 60)) ;;
+        h) echo $((value * 3600)) ;;
+        *) echo "1800" ;;  # Default 30 minutes
+    esac
+}
+
+TOTAL_SECONDS=$(parse_duration_to_seconds "$TOTAL_DURATION")
+# Distribute time across 8 benchmark types (reserving some buffer)
+PER_TEST_SECONDS=$((TOTAL_SECONDS / 10))  # Divide by 10 to leave buffer
+if [ $PER_TEST_SECONDS -lt 30 ]; then
+    PER_TEST_SECONDS=30  # Minimum 30 seconds per test
+fi
+
+# Convert back to duration string
+if [ $PER_TEST_SECONDS -ge 3600 ]; then
+    PER_TEST_DURATION="$((PER_TEST_SECONDS / 3600))h"
+elif [ $PER_TEST_SECONDS -ge 60 ]; then
+    PER_TEST_DURATION="$((PER_TEST_SECONDS / 60))m"
+else
+    PER_TEST_DURATION="${PER_TEST_SECONDS}s"
+fi
+
+echo "🚀 MinIO Warp Comprehensive Benchmark Suite"
+echo "==========================================="
+echo "Target: $MINIO_HOST"
+echo "Total Duration: $TOTAL_DURATION ($TOTAL_SECONDS seconds)"
+echo "Per Test Duration: $PER_TEST_DURATION"
+echo "Results: $RESULTS_DIR"
+echo ""
+
+# Ensure results directory exists
+mkdir -p "$RESULTS_DIR"
+
+# Function to run a benchmark
+run_benchmark() {
+    local test_type=$1
+    local duration=$2
+    local concurrent=$3
+    local obj_size=$4
+
+    echo "Running $test_type benchmark..."
+    echo "  Duration: $duration, Concurrent: $concurrent, Size: $obj_size"
+
+    OUTPUT_FILE="${RESULTS_DIR}/warp_${test_type}_${TIMESTAMP}.json"
+
+    # Don't use --autoterm or --objects for duration-based tests
+    warp "$test_type" \
+        --host="$MINIO_HOST" \
+        --access-key="$ACCESS_KEY" \
+        --secret-key="$SECRET_KEY" \
+        --bucket="warp-bench-${test_type}" \
+        --duration="$duration" \
+        --concurrent="$concurrent" \
+        --obj.size="$obj_size" \
+        --noclear \
+        --json > "$OUTPUT_FILE" 2> "${OUTPUT_FILE%.json}.err"  # keep stderr out of the JSON
+
+    if [ $? -eq 0 ]; then
+        echo "✅ $test_type completed successfully"
+    else
+        echo "❌ $test_type failed"
+    fi
+    echo ""
+}
+
+# Run comprehensive test suite
+echo "📊 Starting Comprehensive Benchmark Suite"
+echo "-----------------------------------------"
+
+# 1. Mixed workload (simulates real-world usage)
+run_benchmark "mixed" "$PER_TEST_DURATION" "10" "1MB"
+
+# 2. GET performance (read-heavy workload)
+run_benchmark "get" "$PER_TEST_DURATION" "20" "1MB"
+
+# 3. PUT performance (write-heavy workload)
+run_benchmark "put" "$PER_TEST_DURATION" "10" "10MB"
+
+# 4. DELETE performance
+run_benchmark "delete" "$PER_TEST_DURATION" "5" "1MB"
+
+# 5. LIST operations (metadata operations)
+run_benchmark "list" "$PER_TEST_DURATION" "5" "1KB"
+
+# 6. Small object performance
+run_benchmark "mixed" "$PER_TEST_DURATION" "10" "1KB"
+
+# 7. Large object performance
+run_benchmark "mixed" "$PER_TEST_DURATION" "5" "100MB"
+
+# 8. High concurrency test
+run_benchmark "mixed" "$PER_TEST_DURATION" "50" "1MB"
+
+echo "==========================================="
+echo "✅ Benchmark Suite Complete!"
+echo ""
+echo "Results saved in: $RESULTS_DIR"
+echo "Run 'make minio-results' to generate analysis"
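The duration-split logic above (parse the total, divide by 10 to leave buffer,
clamp to a 30-second floor, then format back) can be sketched in Python for
quick verification; the function names here are illustrative, not part of the
patch:

```python
def parse_duration_to_seconds(duration: str) -> int:
    """Parse durations like '30m', '2h', '45s' into seconds."""
    value = int("".join(c for c in duration if c.isdigit()) or 0)
    unit = "".join(c for c in duration if c.isalpha())
    factor = {"s": 1, "m": 60, "h": 3600}.get(unit)
    # Unknown unit falls back to 30 minutes, matching the shell default
    return value * factor if factor else 1800


def per_test_duration(total: str) -> str:
    """Split the total across the suite (divide by 10 for buffer),
    enforce a 30s minimum, and format back to a warp duration string."""
    secs = max(parse_duration_to_seconds(total) // 10, 30)
    if secs >= 3600:
        return f"{secs // 3600}h"
    if secs >= 60:
        return f"{secs // 60}m"
    return f"{secs}s"


print(per_test_duration("30m"))  # -> "3m" (1800s / 10 = 180s)
```

A total of "1m" yields 6 seconds per test, which the floor bumps back up to
"30s", so the suite will overrun a very small budget rather than run
meaninglessly short benchmarks.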
-- 
2.50.1


      parent reply	other threads:[~2025-08-31  4:12 UTC|newest]

Thread overview: 5+ messages
2025-08-31  4:11 [PATCH v2 0/4] declared hosts support Luis Chamberlain
2025-08-31  4:11 ` [PATCH v2 1/4] gen_hosts: use kdevops_workflow_name directly for template selection Luis Chamberlain
2025-08-31  4:11 ` [PATCH v2 2/4] declared_hosts: add support for pre-existing infrastructure Luis Chamberlain
2025-08-31  4:11 ` [PATCH v2 3/4] Makefile: add missing extra_vars.yaml dependencies Luis Chamberlain
2025-08-31  4:12 ` Luis Chamberlain [this message]
