kdevops.lists.linux.dev archive mirror
* [PATCH v3 0/3] declared hosts support
@ 2025-09-02 23:53 Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 1/3] gen_hosts: use kdevops_workflow_name directly for template selection Luis Chamberlain
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Luis Chamberlain @ 2025-09-02 23:53 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops
  Cc: hui81.qi, kundan.kumar, Luis Chamberlain

This v3 is now tested, and I simplified the workflow: instead of
sprinkling checks across the different tasks of the create_data_partition
role, we just put a single stop gap at the data partition step in the
role.

This demos the DECLARE_HOSTS feature with MinIO Warp, a new simple
workflow we can use to test S3 with MinIO.
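For anyone wanting to try the demo, the flow presumably follows the usual
kdevops defconfig pattern; the target names below are an assumption
inferred from the defconfig filenames in the diffstat, not verified
commands from this series:

```sh
# Hypothetical usage sketch based on kdevops conventions:
make defconfig-minio-warp-declared-hosts   # select the declared-hosts demo config
make                                       # generate hosts/nodes from the config
make bringup                               # provision (or attach to declared hosts)
make minio                                 # run the MinIO Warp S3 benchmark workflow
```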

Luis Chamberlain (3):
  gen_hosts: use kdevops_workflow_name directly for template selection
  declared_hosts: add support for pre-existing infrastructure
  minio: add MinIO Warp S3 benchmarking with declared hosts support

 .gitignore                                    |   2 +
 Makefile                                      |   7 +
 defconfigs/minio-warp                         |  40 +
 defconfigs/minio-warp-ab                      |  41 +
 defconfigs/minio-warp-btrfs                   |  35 +
 defconfigs/minio-warp-declared-hosts          |  56 ++
 defconfigs/minio-warp-multifs                 |  74 ++
 defconfigs/minio-warp-storage                 |  65 ++
 defconfigs/minio-warp-xfs                     |  53 ++
 defconfigs/minio-warp-xfs-16k                 |  53 ++
 defconfigs/minio-warp-xfs-lbs                 |  65 ++
 kconfigs/Kconfig.bringup                      |   8 +
 kconfigs/Kconfig.declared_hosts               |  71 ++
 kconfigs/workflows/Kconfig                    |  51 ++
 kconfigs/workflows/Kconfig.data_partition     |  13 +-
 playbooks/create_data_partition.yml           |   2 +
 playbooks/minio.yml                           |  53 ++
 playbooks/roles/ai_setup/tasks/main.yml       |  40 +-
 .../create_data_partition/defaults/main.yml   |   1 +
 playbooks/roles/devconfig/defaults/main.yml   |   2 +
 playbooks/roles/devconfig/tasks/main.yml      |  25 +
 playbooks/roles/gen_hosts/defaults/main.yml   |   1 +
 playbooks/roles/gen_hosts/tasks/main.yml      | 315 ++-----
 playbooks/roles/gen_hosts/templates/hosts.j2  | 242 +----
 .../roles/gen_hosts/templates/workflows/ai.j2 |  99 ++
 .../gen_hosts/templates/workflows/blktests.j2 |  58 ++
 .../gen_hosts/templates/workflows/cxl.j2      |   7 +
 .../templates/workflows/declared-hosts.j2     | 239 +++++
 .../templates/workflows/fio-tests.j2          |  38 +
 .../gen_hosts/templates/workflows/fstests.j2  |  72 ++
 .../gen_hosts/templates/workflows/gitr.j2     |  41 +
 .../gen_hosts/templates/workflows/linux.j2    | 110 +++
 .../gen_hosts/templates/workflows/ltp.j2      |  41 +
 .../gen_hosts/templates/workflows/minio.j2    | 173 ++++
 .../gen_hosts/templates/workflows/mix.j2      |  62 ++
 .../gen_hosts/templates/workflows/mmtests.j2  |  77 ++
 .../gen_hosts/templates/workflows/nfstest.j2  |  41 +
 .../gen_hosts/templates/workflows/pynfs.j2    |   7 +
 .../templates/workflows/reboot-limit.j2       |  33 +
 .../templates/workflows/selftests.j2          |  53 ++
 .../gen_hosts/templates/workflows/sysbench.j2 |  53 ++
 playbooks/roles/gen_nodes/tasks/main.yml      | 116 +++
 playbooks/roles/minio_destroy/tasks/main.yml  |  34 +
 playbooks/roles/minio_install/tasks/main.yml  |  61 ++
 playbooks/roles/minio_results/tasks/main.yml  |  86 ++
 playbooks/roles/minio_setup/defaults/main.yml |  16 +
 playbooks/roles/minio_setup/tasks/main.yml    | 100 ++
 .../roles/minio_uninstall/tasks/main.yml      |  17 +
 playbooks/roles/minio_warp_run/tasks/main.yml | 249 +++++
 .../templates/warp_config.json.j2             |  14 +
 workflows/Makefile                            |   4 +
 workflows/ai/Makefile                         |   3 -
 workflows/blktests/Makefile                   |   3 -
 workflows/cxl/Makefile                        |   2 -
 workflows/demos/reboot-limit/Kconfig          |   5 +
 workflows/fio-tests/Makefile                  |   3 -
 workflows/fstests/Makefile                    |   3 -
 workflows/gitr/Makefile                       |   3 -
 workflows/linux/Makefile                      |   1 -
 workflows/ltp/Makefile                        |   3 -
 workflows/minio/Kconfig                       |  23 +
 workflows/minio/Kconfig.docker                |  66 ++
 workflows/minio/Kconfig.storage               | 364 ++++++++
 workflows/minio/Kconfig.warp                  | 141 +++
 workflows/minio/Makefile                      |  76 ++
 .../minio/scripts/analyze_warp_results.py     | 858 ++++++++++++++++++
 .../minio/scripts/generate_warp_report.py     | 404 +++++++++
 .../minio/scripts/run_benchmark_suite.sh      | 116 +++
 workflows/mmtests/Makefile                    |   3 -
 workflows/nfstest/Makefile                    |   3 -
 workflows/pynfs/Makefile                      |   3 -
 workflows/selftests/Makefile                  |   3 -
 workflows/sysbench/Makefile                   |   3 -
 73 files changed, 4751 insertions(+), 554 deletions(-)
 create mode 100644 defconfigs/minio-warp
 create mode 100644 defconfigs/minio-warp-ab
 create mode 100644 defconfigs/minio-warp-btrfs
 create mode 100644 defconfigs/minio-warp-declared-hosts
 create mode 100644 defconfigs/minio-warp-multifs
 create mode 100644 defconfigs/minio-warp-storage
 create mode 100644 defconfigs/minio-warp-xfs
 create mode 100644 defconfigs/minio-warp-xfs-16k
 create mode 100644 defconfigs/minio-warp-xfs-lbs
 create mode 100644 kconfigs/Kconfig.declared_hosts
 create mode 100644 playbooks/minio.yml
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/ai.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/blktests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/cxl.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/declared-hosts.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/fio-tests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/fstests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/gitr.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/linux.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/ltp.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/minio.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/mix.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/mmtests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/nfstest.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/pynfs.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/reboot-limit.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/selftests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/sysbench.j2
 create mode 100644 playbooks/roles/minio_destroy/tasks/main.yml
 create mode 100644 playbooks/roles/minio_install/tasks/main.yml
 create mode 100644 playbooks/roles/minio_results/tasks/main.yml
 create mode 100644 playbooks/roles/minio_setup/defaults/main.yml
 create mode 100644 playbooks/roles/minio_setup/tasks/main.yml
 create mode 100644 playbooks/roles/minio_uninstall/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/templates/warp_config.json.j2
 create mode 100644 workflows/minio/Kconfig
 create mode 100644 workflows/minio/Kconfig.docker
 create mode 100644 workflows/minio/Kconfig.storage
 create mode 100644 workflows/minio/Kconfig.warp
 create mode 100644 workflows/minio/Makefile
 create mode 100755 workflows/minio/scripts/analyze_warp_results.py
 create mode 100755 workflows/minio/scripts/generate_warp_report.py
 create mode 100755 workflows/minio/scripts/run_benchmark_suite.sh

-- 
2.50.1



* [PATCH v3 1/3] gen_hosts: use kdevops_workflow_name directly for template selection
  2025-09-02 23:53 [PATCH v3 0/3] declared hosts support Luis Chamberlain
@ 2025-09-02 23:53 ` Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 3/3] minio: add MinIO Warp S3 benchmarking with declared hosts support Luis Chamberlain
  2 siblings, 0 replies; 5+ messages in thread
From: Luis Chamberlain @ 2025-09-02 23:53 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops
  Cc: hui81.qi, kundan.kumar, Luis Chamberlain

The hosts.j2 template had become unwieldy, with 40+ lines of conditional
logic to select which workflow template to include. Since kdevops already
defines KDEVOPS_WORKFLOW_NAME in Kconfig, and it is always set (either to
the workflow name or to "mix" for non-dedicated workflows), we can
eliminate ALL of that conditional logic.

The entire hosts.j2 is now just:
  {% include 'workflows/' + kdevops_workflow_name + '.j2' %}

This massive simplification:
- Reduces hosts.j2 from 40+ lines to just 1 line of logic
- Removes ALL conditional template selection from gen_hosts playbook
- Eliminates ALL workflow-specific template overrides from Makefiles
- Makes adding new workflows trivial - just define KDEVOPS_WORKFLOW_NAME

Additional changes:
- Split monolithic hosts.j2 into per-workflow templates under workflows/
- Rename default.j2 to mix.j2 to match non-dedicated workflow name
- Add missing 'cxl' to KDEVOPS_WORKFLOW_NAME in main Kconfig
- Add KDEVOPS_WORKFLOW_NAME to reboot-limit demo workflow
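
The dispatch mechanism is plain Jinja2: `{% include %}` accepts an
expression, so the template name can be built from a variable. Here is a
minimal standalone sketch of that pattern; only the include line matches
the patch, while the per-workflow template bodies and the
`render_hosts()` helper are hypothetical stand-ins:

```python
from jinja2 import Environment, DictLoader

# Stand-in templates: hosts.j2 carries the one include line from the
# patch; the workflows/*.j2 bodies here are illustrative only.
templates = {
    "hosts.j2": "{% include 'workflows/' + kdevops_workflow_name + '.j2' %}",
    "workflows/fstests.j2": "[all]\nlocalhost ansible_connection=local\n",
    "workflows/mix.j2": "[all]\nlocalhost ansible_connection=local\n",
}

env = Environment(
    loader=DictLoader(templates),
    trim_blocks=True,      # same template options the gen_hosts task uses
    lstrip_blocks=True,
)

def render_hosts(workflow_name: str) -> str:
    # hosts.j2 dispatches purely on the workflow name; no conditional
    # template selection is needed in the caller.
    return env.get_template("hosts.j2").render(
        kdevops_workflow_name=workflow_name
    )

print(render_hosts("fstests"))
```

Adding a workflow then only requires dropping a new
`workflows/<name>.j2` file and having KDEVOPS_WORKFLOW_NAME resolve to
`<name>`.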

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 kconfigs/workflows/Kconfig                    |   9 +
 playbooks/roles/gen_hosts/tasks/main.yml      | 293 ++----------------
 playbooks/roles/gen_hosts/templates/hosts.j2  | 240 +-------------
 .../roles/gen_hosts/templates/workflows/ai.j2 |  99 ++++++
 .../gen_hosts/templates/workflows/blktests.j2 |  58 ++++
 .../gen_hosts/templates/workflows/cxl.j2      |   7 +
 .../templates/workflows/fio-tests.j2          |  38 +++
 .../gen_hosts/templates/workflows/fstests.j2  |  72 +++++
 .../gen_hosts/templates/workflows/gitr.j2     |  41 +++
 .../gen_hosts/templates/workflows/linux.j2    | 110 +++++++
 .../gen_hosts/templates/workflows/ltp.j2      |  41 +++
 .../gen_hosts/templates/workflows/mix.j2      |  62 ++++
 .../gen_hosts/templates/workflows/mmtests.j2  |  77 +++++
 .../gen_hosts/templates/workflows/nfstest.j2  |  41 +++
 .../gen_hosts/templates/workflows/pynfs.j2    |   7 +
 .../templates/workflows/reboot-limit.j2       |  33 ++
 .../templates/workflows/selftests.j2          |  53 ++++
 .../gen_hosts/templates/workflows/sysbench.j2 |  53 ++++
 workflows/ai/Makefile                         |   3 -
 workflows/blktests/Makefile                   |   3 -
 workflows/cxl/Makefile                        |   2 -
 workflows/demos/reboot-limit/Kconfig          |   5 +
 workflows/fio-tests/Makefile                  |   3 -
 workflows/fstests/Makefile                    |   3 -
 workflows/gitr/Makefile                       |   3 -
 workflows/linux/Makefile                      |   1 -
 workflows/ltp/Makefile                        |   3 -
 workflows/mmtests/Makefile                    |   3 -
 workflows/nfstest/Makefile                    |   3 -
 workflows/pynfs/Makefile                      |   3 -
 workflows/selftests/Makefile                  |   3 -
 workflows/sysbench/Makefile                   |   3 -
 32 files changed, 841 insertions(+), 534 deletions(-)
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/ai.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/blktests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/cxl.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/fio-tests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/fstests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/gitr.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/linux.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/ltp.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/mix.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/mmtests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/nfstest.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/pynfs.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/reboot-limit.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/selftests.j2
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/sysbench.j2

diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index 70898a1a..de279b48 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -228,6 +228,7 @@ config KDEVOPS_WORKFLOW_NAME
 	output yaml
 	default "fstests" if KDEVOPS_WORKFLOW_DEDICATE_FSTESTS
 	default "blktests" if KDEVOPS_WORKFLOW_DEDICATE_BLKTESTS
+	default "cxl" if KDEVOPS_WORKFLOW_DEDICATE_CXL
 	default "pynfs" if KDEVOPS_WORKFLOW_DEDICATE_PYNFS
 	default "selftests" if KDEVOPS_WORKFLOW_DEDICATE_SELFTESTS
 	default "gitr" if KDEVOPS_WORKFLOW_DEDICATE_GITR
@@ -514,4 +515,12 @@ endif # WORKFLOWS_LINUX_TESTS
 
 endif # WORKFLOWS_TESTS
 
+# For Linux custom workflow without tests, still need workflow name for templating
+if WORKFLOW_LINUX_CUSTOM && !WORKFLOWS_TESTS
+config KDEVOPS_WORKFLOW_NAME
+	string
+	output yaml
+	default "linux"
+endif
+
 endif # WORKFLOWS
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index fb63629a..518064ed 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -56,180 +56,22 @@
   when:
     - is_fstests|bool
 
-- name: Generate the Ansible hosts file for a Linux kernel build
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - bootlinux_builder
-    - ansible_hosts_template.stat.exists
-
-- name: Generate the Ansible inventory file
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - not kdevops_workflows_dedicated_workflow
-    - ansible_hosts_template.stat.exists
-    - not kdevops_enable_nixos|default(false)|bool
-
-- name: Generate the Ansible inventory file for NixOS
-  tags: ['hosts']
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: True
-    lstrip_blocks: True
-  when:
-    - not kdevops_workflows_dedicated_workflow
-    - ansible_hosts_template.stat.exists
-    - kdevops_enable_nixos|default(false)|bool
-
-- name: Update Ansible inventory access modification time so make sees it updated
-  ansible.builtin.file:
-    path: "{{ ansible_cfg_inventory }}"
-    state: touch
-    mode: "0755"
-
-- name: Generate the Ansible inventory file for dedicated cxl work
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ anisble_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_cxl
-    - ansible_hosts_template.stat.exists
-
-- name: Generate the Ansible inventory file for dedicated pynfs work
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_pynfs
-    - ansible_hosts_template.stat.exists
-
-- name: Generate the Ansible inventory file for dedicated gitr workflow
-  tags: ["hosts"]
-  vars:
-    gitr_enabled_hosts: "{{ gitr_enabled_test_groups | ansible.builtin.split }}"
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_gitr
-    - ansible_hosts_template.stat.exists
-
-- name: Generate an Ansible inventory file for a dedicated ltp workflow
+- name: Infer enabled fstests test section types
   tags: ["hosts"]
-  vars:
-    ltp_enabled_hosts: "{{ ltp_enabled_test_groups | ansible.builtin.split }}"
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ltp
-    - ansible_hosts_template.stat.exists
-
-- name: Generate the Ansible inventory file for dedicated nfstest workflow
-  tags: ["hosts"]
-  vars:
-    nfstest_enabled_hosts: "{{ nfstest_enabled_test_groups | ansible.builtin.split }}"
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_nfstest
-    - ansible_hosts_template.stat.exists
-
-- name: Set empty enabled test types list for fstests
-  tags: ["hosts"]
-  ansible.builtin.set_fact:
-    fstests_enabled_test_types: []
-  when:
-    - is_fstests
-    - ansible_hosts_template.stat.exists
-
-- name: Check which fstests test types are enabled
-  register: fstests_enabled_test_types_reg
   vars:
     fs: "{{ fstests_fstyp | upper }}"
-    config_val: "{{ 'CONFIG_FSTESTS_' + fs + '_SECTION_' }}"
-    fs_config_data: "{{ lookup('file', fs_config_path) }}"
-    sections_without_default: "{{ fs_config_data | regex_replace('\\[default\\]', multiline=True) }}"
-    sections_without_default_and_nfsd: "{{ sections_without_default | regex_replace('\\[nfsd\\]', multiline=True) }}"
-    sections_lines: "{{ sections_without_default_and_nfsd | regex_findall('^\\[(.*)', multiline=True) }}"
-    clean_section_lines: "{{ sections_lines | regex_replace('\\[') | regex_replace('\\]') }}"
-    clean_section_lines_without_fsname: "{{ clean_section_lines | regex_replace(fstests_fstyp + '_') }}"
-    config_sections_targets: "{{ clean_section_lines_without_fsname | replace(\"'\", '') | split(', ') }}"
-  ansible.builtin.lineinfile:
-    path: "{{ topdir_path }}/.config"
-    regexp: "^({{ config_val + item.upper() }})=y"
-    line: ""
-  check_mode: true
-  with_items: "{{ config_sections_targets }}"
-  loop_control:
-    label: "Checking for {{ config_val + item.upper() }}"
-  when:
-    - is_fstests
-    - ansible_hosts_template.stat.exists
-
-- name: Now expand the list of enabled fstests for valid configuration sections
-  tags: ["hosts"]
+    config_prefix: "{{ 'CONFIG_FSTESTS_' + fs + '_SECTION_' }}"
   ansible.builtin.set_fact:
-    fstests_enabled_test_types: "{{ fstests_enabled_test_types + [fstests_fstyp + '-' + item.item | regex_replace('_', '-')] }}"
-  with_items: "{{ fstests_enabled_test_types_reg.results }}"
-  loop_control:
-    label: "Checking for {{ item.item }} "
-  when:
-    - is_fstests
-    - ansible_hosts_template.stat.exists
-    - item.changed
-
-- name: Generate the Ansible inventory file for a dedicated fstests setup
-  tags: ["hosts"]
-  vars:
-    fs_config_data: "{{ lookup('file', fs_config_path) }}"
-    sections_without_default: "{{ fs_config_data | regex_replace('\\[default\\]', multiline=True) }}"
-    sections_lines: "{{ sections_without_default | regex_findall('^\\[(.*)', multiline=True) }}"
-    clean_section_lines: "{{ sections_lines | regex_replace('\\[') | regex_replace('\\]') }}"
-    sections_replace_underscore: "{{ clean_section_lines | replace('_', '-') }}"
-    sections: "{{ sections_replace_underscore | replace(\"'\", '') | split(', ') }}"
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
+    fstests_enabled_test_types: >-
+      {{
+        lookup('file', topdir_path + '/.config')
+        | regex_findall('^' + config_prefix + '(.*)=y$', multiline=True)
+        | reject('match', '.*_ENABLED$')
+        | map('lower')
+        | map('regex_replace', '_', '-')
+        | map('regex_replace', '^', fstests_fstyp + '-')
+        | list
+      }}
   when:
     - is_fstests
     - ansible_hosts_template.stat.exists
@@ -257,19 +99,6 @@
     - kdevops_workflow_enable_blktests
     - ansible_hosts_template.stat.exists
 
-- name: Generate the Ansible inventory file for a dedicated blktests setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_blktests
-    - ansible_hosts_template.stat.exists
-
 - name: Infer enabled selftests test section types
   ansible.builtin.set_fact:
     selftests_enabled_test_types: >-
@@ -284,19 +113,6 @@
     - kdevops_workflow_enable_selftests
     - ansible_hosts_template.stat.exists
 
-- name: Generate the Ansible inventory file for a dedicated selftests setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_selftests
-    - ansible_hosts_template.stat.exists
-
 - name: Collect dynamically supported filesystems
   vars:
     supported_filesystems_variables: "{{ hostvars[inventory_hostname] | dict2items | selectattr('key', 'search', '^sysbench_supported_filesystem_') }}"
@@ -326,36 +142,6 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_sysbench
 
-- name: Generate the Ansible inventory file for a dedicated sysbench setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_sysbench
-    - ansible_hosts_template.stat.exists
-
-
-- name: Generate the Ansible hosts file for a dedicated fio-tests setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-    mode: "0644"
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_fio_tests
-    - ansible_hosts_template.stat.exists
-    - not kdevops_enable_nixos|default(false)|bool
-
-
 - name: Infer enabled mmtests test types
   ansible.builtin.set_fact:
     mmtests_enabled_test_types: >-
@@ -370,32 +156,6 @@
     - kdevops_workflow_enable_mmtests
     - ansible_hosts_template.stat.exists
 
-- name: Generate the Ansible hosts file for a dedicated mmtests setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_mmtests
-    - ansible_hosts_template.stat.exists
-
-- name: Generate the Ansible hosts file for a dedicated reboot-limit setup
-  tags: ["hosts"]
-  ansible.builtin.template:
-    src: "{{ kdevops_hosts_template }}"
-    dest: "{{ ansible_cfg_inventory }}"
-    force: true
-    trim_blocks: true
-    lstrip_blocks: true
-  when:
-    - kdevops_workflows_dedicated_workflow
-    - workflows_reboot_limit
-    - ansible_hosts_template.stat.exists
-
 - name: Load AI nodes configuration for multi-filesystem setup
   include_vars:
     file: "{{ topdir_path }}/{{ kdevops_nodes }}"
@@ -415,20 +175,35 @@
     - ai_enable_multifs_testing|default(false)|bool
     - guestfs_nodes is defined
 
-- name: Generate the Ansible hosts file for a dedicated AI setup
-  tags: ['hosts']
+- name: Generate the Ansible inventory file
+  tags: ["hosts"]
+  vars:
+    # Variables for specific workflows
+    gitr_enabled_hosts: "{{ gitr_enabled_test_groups | ansible.builtin.split if kdevops_workflow_enable_gitr else [] }}"
+    ltp_enabled_hosts: "{{ ltp_enabled_test_groups | ansible.builtin.split if kdevops_workflow_enable_ltp else [] }}"
+    nfstest_enabled_hosts: "{{ nfstest_enabled_test_groups | ansible.builtin.split if kdevops_workflow_enable_nfstest else [] }}"
+    # Variables for fstests
+    fs_config_data: "{{ lookup('file', fs_config_path) if is_fstests else '' }}"
+    sections_without_default: "{{ fs_config_data | regex_replace('\\[default\\]', multiline=True) if is_fstests else '' }}"
+    sections_lines: "{{ sections_without_default | regex_findall('^\\[(.*)', multiline=True) if is_fstests else [] }}"
+    clean_section_lines: "{{ sections_lines | regex_replace('\\[') | regex_replace('\\]') if is_fstests else [] }}"
+    sections_replace_underscore: "{{ clean_section_lines | replace('_', '-') if is_fstests else [] }}"
+    sections: "{{ sections_replace_underscore | replace(\"'\", '') | split(', ') if is_fstests else [] }}"
   ansible.builtin.template:
     src: "{{ kdevops_hosts_template }}"
     dest: "{{ ansible_cfg_inventory }}"
     force: true
-    trim_blocks: True
-    lstrip_blocks: True
-    mode: '0644'
+    trim_blocks: true
+    lstrip_blocks: true
   when:
-    - kdevops_workflows_dedicated_workflow
-    - kdevops_workflow_enable_ai
     - ansible_hosts_template.stat.exists
 
+- name: Update Ansible inventory access modification time so make sees it updated
+  ansible.builtin.file:
+    path: "{{ ansible_cfg_inventory }}"
+    state: touch
+    mode: "0755"
+
 - name: Verify if final host file exists
   ansible.builtin.stat:
     path: "{{ ansible_cfg_inventory }}"
diff --git a/playbooks/roles/gen_hosts/templates/hosts.j2 b/playbooks/roles/gen_hosts/templates/hosts.j2
index 0e896481..be0378db 100644
--- a/playbooks/roles/gen_hosts/templates/hosts.j2
+++ b/playbooks/roles/gen_hosts/templates/hosts.j2
@@ -5,242 +5,4 @@ proper identation. We don't need identation for the ansible hosts file.
 Each workflow which has its own custom ansible host file generated should use
 its own jinja2 template file and define its own ansible task for its generation.
 #}
-{% if kdevops_workflows_dedicated_workflow %}
-{% if workflows_reboot_limit %}
-[all]
-localhost ansible_connection=local
-{{ kdevops_host_prefix }}-reboot-limit
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-reboot-limit-dev
-{% endif %}
-
-[all:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-[baseline]
-{{ kdevops_host_prefix }}-reboot-limit
-
-[baseline:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% if kdevops_baseline_and_dev %}
-[dev]
-{{ kdevops_host_prefix }}-reboot-limit-dev
-
-[dev:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% endif %}
-[reboot-limit]
-{{ kdevops_host_prefix }}-reboot-limit
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-reboot-limit-dev
-{% endif %}
-
-[reboot-limit:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-{% elif kdevops_workflow_enable_fio_tests %}
-[all]
-localhost ansible_connection=local
-{{ kdevops_host_prefix }}-fio-tests
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-fio-tests-dev
-{% endif %}
-
-[all:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-[baseline]
-{{ kdevops_host_prefix }}-fio-tests
-
-[baseline:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% if kdevops_baseline_and_dev %}
-[dev]
-{{ kdevops_host_prefix }}-fio-tests-dev
-
-[dev:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% endif %}
-[fio_tests]
-{{ kdevops_host_prefix }}-fio-tests
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-fio-tests-dev
-{% endif %}
-
-[fio_tests:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-[service]
-
-[service:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-{% elif kdevops_workflow_enable_ai %}
-{% if ai_enable_multifs_testing|default(false)|bool %}
-{# Multi-filesystem section-based hosts #}
-[all]
-localhost ansible_connection=local
-{% for node in all_generic_nodes %}
-{{ node }}
-{% endfor %}
-
-[all:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-[baseline]
-{% for node in all_generic_nodes %}
-{% if not node.endswith('-dev') %}
-{{ node }}
-{% endif %}
-{% endfor %}
-
-[baseline:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% if kdevops_baseline_and_dev %}
-[dev]
-{% for node in all_generic_nodes %}
-{% if node.endswith('-dev') %}
-{{ node }}
-{% endif %}
-{% endfor %}
-
-[dev:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% endif %}
-[ai]
-{% for node in all_generic_nodes %}
-{{ node }}
-{% endfor %}
-
-[ai:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{# Individual section groups for multi-filesystem testing #}
-{% set section_names = [] %}
-{% for node in all_generic_nodes %}
-{% if not node.endswith('-dev') %}
-{% set section = node.replace(kdevops_host_prefix + '-ai-', '') %}
-{% if section != kdevops_host_prefix + '-ai' %}
-{% if section_names.append(section) %}{% endif %}
-{% endif %}
-{% endif %}
-{% endfor %}
-
-{% for section in section_names %}
-[ai_{{ section | replace('-', '_') }}]
-{{ kdevops_host_prefix }}-ai-{{ section }}
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-ai-{{ section }}-dev
-{% endif %}
-
-[ai_{{ section | replace('-', '_') }}:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% endfor %}
-{% else %}
-{# Single filesystem hosts (original behavior) #}
-[all]
-localhost ansible_connection=local
-{{ kdevops_host_prefix }}-ai
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-ai-dev
-{% endif %}
-
-[all:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-[baseline]
-{{ kdevops_host_prefix }}-ai
-
-[baseline:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% if kdevops_baseline_and_dev %}
-[dev]
-{{ kdevops_host_prefix }}-ai-dev
-
-[dev:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-
-{% endif %}
-[ai]
-{{ kdevops_host_prefix }}-ai
-{% if kdevops_baseline_and_dev %}
-{{ kdevops_host_prefix }}-ai-dev
-{% endif %}
-
-[ai:vars]
-ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
-{% endif %}
-{% else %}
-[all]
-localhost ansible_connection=local
-write-your-own-template-for-your-workflow-and-task
-{% endif %}
-{% else %}
-[all]
-localhost ansible_connection=local
-{% if kdevops_enable_nixos|default(false) %}
-{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
-{% else %}
-{{ kdevops_host_prefix }}
-{% endif %}
-{% if kdevops_baseline_and_dev == True %}
-{% if kdevops_enable_nixos|default(false) %}
-{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
-{% else %}
-{{ kdevops_host_prefix }}-dev
-{% endif %}
-{% endif %}
-{% if kdevops_enable_iscsi %}
-{{ kdevops_host_prefix }}-iscsi
-{% endif %}
-{% if kdevops_nfsd_enable %}
-{{ kdevops_host_prefix }}-nfsd
-{% endif %}
-[all:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-[baseline]
-{% if kdevops_enable_nixos|default(false) %}
-{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
-{% else %}
-{{ kdevops_host_prefix }}
-{% endif %}
-[baseline:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-[dev]
-{% if kdevops_baseline_and_dev %}
-{% if kdevops_enable_nixos|default(false) %}
-{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
-{% else %}
-{{ kdevops_host_prefix }}-dev
-{% endif %}
-{% endif %}
-[dev:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-{% if kdevops_enable_iscsi %}
-[iscsi]
-{{ kdevops_host_prefix }}-iscsi
-[iscsi:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-{% endif %}
-{% if kdevops_nfsd_enable %}
-[nfsd]
-{{ kdevops_host_prefix }}-nfsd
-[nfsd:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-{% endif %}
-[service]
-{% if kdevops_enable_iscsi %}
-{{ kdevops_host_prefix }}-iscsi
-{% endif %}
-{% if kdevops_nfsd_enable %}
-{{ kdevops_host_prefix }}-nfsd
-{% endif %}
-[service:vars]
-ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
-{% endif %}
+{% include 'workflows/' + kdevops_workflow_name + '.j2' %}
diff --git a/playbooks/roles/gen_hosts/templates/workflows/ai.j2 b/playbooks/roles/gen_hosts/templates/workflows/ai.j2
new file mode 100644
index 00000000..d0914436
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/ai.j2
@@ -0,0 +1,99 @@
+{# Workflow template for AI #}
+{% if ai_enable_multifs_testing|default(false)|bool %}
+{# Multi-filesystem section-based hosts #}
+[all]
+localhost ansible_connection=local
+{% for node in all_generic_nodes %}
+{{ node }}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for node in all_generic_nodes %}
+{% if not node.endswith('-dev') %}
+{{ node }}
+{% endif %}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for node in all_generic_nodes %}
+{% if node.endswith('-dev') %}
+{{ node }}
+{% endif %}
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[ai]
+{% for node in all_generic_nodes %}
+{{ node }}
+{% endfor %}
+
+[ai:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{# Individual section groups for multi-filesystem testing #}
+{% set section_names = [] %}
+{% for node in all_generic_nodes %}
+{% if not node.endswith('-dev') %}
+{% set section = node.replace(kdevops_host_prefix + '-ai-', '') %}
+{% if section != kdevops_host_prefix + '-ai' %}
+{% if section_names.append(section) %}{% endif %}
+{% endif %}
+{% endif %}
+{% endfor %}
+
+{% for section in section_names %}
+[ai_{{ section | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-ai-{{ section }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-ai-{{ section }}-dev
+{% endif %}
+
+[ai_{{ section | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
+{% else %}
+{# Single filesystem hosts (original behavior) #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-ai
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-ai-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-ai
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-ai-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[ai]
+{{ kdevops_host_prefix }}-ai
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-ai-dev
+{% endif %}
+
+[ai:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
diff --git a/playbooks/roles/gen_hosts/templates/workflows/blktests.j2 b/playbooks/roles/gen_hosts/templates/workflows/blktests.j2
new file mode 100644
index 00000000..eea3eef3
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/blktests.j2
@@ -0,0 +1,58 @@
+{# Workflow template for blktests #}
+[all]
+localhost ansible_connection=local
+{% for test_type in blktests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for test_type in blktests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for test_type in blktests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[blktests]
+{% for test_type in blktests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+
+[blktests:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% for test_type in blktests_enabled_test_types %}
+[blktests_{{ test_type | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+
+[blktests_{{ test_type | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
+
+[service]
+
+[service:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/cxl.j2 b/playbooks/roles/gen_hosts/templates/workflows/cxl.j2
new file mode 100644
index 00000000..53790f29
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/cxl.j2
@@ -0,0 +1,7 @@
+{# Workflow template for CXL #}
+[all]
+localhost ansible_connection=local
+write-your-own-template-for-cxl-workflow
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/fio-tests.j2 b/playbooks/roles/gen_hosts/templates/workflows/fio-tests.j2
new file mode 100644
index 00000000..548941a0
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/fio-tests.j2
@@ -0,0 +1,38 @@
+{# Workflow template for fio-tests #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-fio-tests
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-fio-tests-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-fio-tests
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-fio-tests-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[fio_tests]
+{{ kdevops_host_prefix }}-fio-tests
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-fio-tests-dev
+{% endif %}
+
+[fio_tests:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[service]
+
+[service:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/fstests.j2 b/playbooks/roles/gen_hosts/templates/workflows/fstests.j2
new file mode 100644
index 00000000..362ce955
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/fstests.j2
@@ -0,0 +1,72 @@
+{# Workflow template for fstests #}
+[all]
+localhost ansible_connection=local
+{% for node_section in fstests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ node_section }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ node_section }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for node_section in fstests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ node_section }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for node_section in fstests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ node_section }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[fstests]
+{% for node_section in fstests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ node_section }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ node_section }}-dev
+{% endif %}
+{% endfor %}
+
+[fstests:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% for section in fstests_enabled_test_types %}
+[fstests_{{ section | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-{{ section }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ section }}-dev
+{% endif %}
+
+[fstests_{{ section | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
+
+[nfsd]
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+
+[nfsd:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[service]
+{% if kdevops_enable_iscsi %}
+{{ kdevops_host_prefix }}-iscsi
+{% endif %}
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+
+[service:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/gitr.j2 b/playbooks/roles/gen_hosts/templates/workflows/gitr.j2
new file mode 100644
index 00000000..86ee9326
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/gitr.j2
@@ -0,0 +1,41 @@
+{# Workflow template for gitr #}
+[all]
+localhost ansible_connection=local
+{% for host in gitr_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for host in gitr_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for host in gitr_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[gitr]
+{% for host in gitr_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[gitr:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/linux.j2 b/playbooks/roles/gen_hosts/templates/workflows/linux.j2
new file mode 100644
index 00000000..5d9ebb67
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/linux.j2
@@ -0,0 +1,110 @@
+{# Template for Linux custom kernel workflow #}
+[all]
+localhost ansible_connection=local
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}
+{% endif %}
+{% if kdevops_baseline_and_dev == True %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}-dev
+{% endif %}
+{% endif %}
+{% if kdevops_enable_iscsi %}
+{{ kdevops_host_prefix }}-iscsi
+{% endif %}
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+{% if kdevops_smbd_enable|default(false) %}
+{{ kdevops_host_prefix }}-smbd
+{% endif %}
+{% if kdevops_krb5kdc_enable|default(false) %}
+{{ kdevops_host_prefix }}-kdc
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}
+{% endif %}
+
+[baseline:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}-dev
+{% endif %}
+
+[dev:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+[linux]
+{{ kdevops_host_prefix }}
+{% if kdevops_baseline_and_dev == True %}
+{{ kdevops_host_prefix }}-dev
+{% endif %}
+
+[linux:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_enable_iscsi %}
+[iscsi]
+{{ kdevops_host_prefix }}-iscsi
+
+[iscsi:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if kdevops_nfsd_enable %}
+[nfsd]
+{{ kdevops_host_prefix }}-nfsd
+
+[nfsd:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if kdevops_smbd_enable|default(false) %}
+[smbd]
+{{ kdevops_host_prefix }}-smbd
+
+[smbd:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if kdevops_krb5kdc_enable|default(false) %}
+[kdc]
+{{ kdevops_host_prefix }}-kdc
+
+[kdc:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+[service]
+{% if kdevops_enable_iscsi %}
+{{ kdevops_host_prefix }}-iscsi
+{% endif %}
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+{% if kdevops_smbd_enable|default(false) %}
+{{ kdevops_host_prefix }}-smbd
+{% endif %}
+{% if kdevops_krb5kdc_enable|default(false) %}
+{{ kdevops_host_prefix }}-kdc
+{% endif %}
+
+[service:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/ltp.j2 b/playbooks/roles/gen_hosts/templates/workflows/ltp.j2
new file mode 100644
index 00000000..fb120828
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/ltp.j2
@@ -0,0 +1,41 @@
+{# Workflow template for ltp #}
+[all]
+localhost ansible_connection=local
+{% for host in ltp_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for host in ltp_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for host in ltp_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[ltp]
+{% for host in ltp_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[ltp:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/mix.j2 b/playbooks/roles/gen_hosts/templates/workflows/mix.j2
new file mode 100644
index 00000000..86619309
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/mix.j2
@@ -0,0 +1,62 @@
+{# Default template for non-workflow setups #}
+[all]
+localhost ansible_connection=local
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}
+{% endif %}
+{% if kdevops_baseline_and_dev == True %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}-dev
+{% endif %}
+{% endif %}
+{% if kdevops_enable_iscsi %}
+{{ kdevops_host_prefix }}-iscsi
+{% endif %}
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+[all:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+[baseline]
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }} ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}
+{% endif %}
+[baseline:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+[dev]
+{% if kdevops_baseline_and_dev %}
+{% if kdevops_enable_nixos|default(false) %}
+{{ kdevops_host_prefix }}-dev ansible_python_interpreter=/run/current-system/sw/bin/python3
+{% else %}
+{{ kdevops_host_prefix }}-dev
+{% endif %}
+{% endif %}
+[dev:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% if kdevops_enable_iscsi %}
+[iscsi]
+{{ kdevops_host_prefix }}-iscsi
+[iscsi:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+{% if kdevops_nfsd_enable %}
+[nfsd]
+{{ kdevops_host_prefix }}-nfsd
+[nfsd:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+{% endif %}
+[service]
+{% if kdevops_enable_iscsi %}
+{{ kdevops_host_prefix }}-iscsi
+{% endif %}
+{% if kdevops_nfsd_enable %}
+{{ kdevops_host_prefix }}-nfsd
+{% endif %}
+[service:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/mmtests.j2 b/playbooks/roles/gen_hosts/templates/workflows/mmtests.j2
new file mode 100644
index 00000000..d796cbe6
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/mmtests.j2
@@ -0,0 +1,77 @@
+{# Workflow template for mmtests #}
+[all]
+localhost ansible_connection=local
+{% if mmtests_enabled_test_types %}
+{% for test_type in mmtests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+{% else %}
+{{ kdevops_host_prefix }}-mmtests
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-mmtests-dev
+{% endif %}
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% if mmtests_enabled_test_types %}
+{% for test_type in mmtests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% endfor %}
+{% else %}
+{{ kdevops_host_prefix }}-mmtests
+{% endif %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% if mmtests_enabled_test_types %}
+{% for test_type in mmtests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endfor %}
+{% else %}
+{{ kdevops_host_prefix }}-mmtests-dev
+{% endif %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[mmtests]
+{% if mmtests_enabled_test_types %}
+{% for test_type in mmtests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+{% else %}
+{{ kdevops_host_prefix }}-mmtests
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-mmtests-dev
+{% endif %}
+{% endif %}
+
+[mmtests:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if mmtests_enabled_test_types %}
+{% for test_type in mmtests_enabled_test_types %}
+[mmtests_{{ test_type | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+
+[mmtests_{{ test_type | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
+{% endif %}
diff --git a/playbooks/roles/gen_hosts/templates/workflows/nfstest.j2 b/playbooks/roles/gen_hosts/templates/workflows/nfstest.j2
new file mode 100644
index 00000000..34aa7dfe
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/nfstest.j2
@@ -0,0 +1,41 @@
+{# Workflow template for nfstest #}
+[all]
+localhost ansible_connection=local
+{% for host in nfstest_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for host in nfstest_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for host in nfstest_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[nfstest]
+{% for host in nfstest_enabled_hosts %}
+{{ kdevops_host_prefix }}-{{ host }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ host }}-dev
+{% endif %}
+{% endfor %}
+
+[nfstest:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/pynfs.j2 b/playbooks/roles/gen_hosts/templates/workflows/pynfs.j2
new file mode 100644
index 00000000..6145b1a0
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/pynfs.j2
@@ -0,0 +1,7 @@
+{# Workflow template for pynfs #}
+[all]
+localhost ansible_connection=local
+write-your-own-template-for-pynfs-workflow
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/reboot-limit.j2 b/playbooks/roles/gen_hosts/templates/workflows/reboot-limit.j2
new file mode 100644
index 00000000..07bd6f80
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/reboot-limit.j2
@@ -0,0 +1,33 @@
+{# Workflow template for reboot-limit #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-reboot-limit
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-reboot-limit-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-reboot-limit
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-reboot-limit-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[reboot-limit]
+{{ kdevops_host_prefix }}-reboot-limit
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-reboot-limit-dev
+{% endif %}
+
+[reboot-limit:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/selftests.j2 b/playbooks/roles/gen_hosts/templates/workflows/selftests.j2
new file mode 100644
index 00000000..4ef598c7
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/selftests.j2
@@ -0,0 +1,53 @@
+{# Workflow template for selftests #}
+[all]
+localhost ansible_connection=local
+{% for test_type in selftests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for test_type in selftests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for test_type in selftests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[selftests]
+{% for test_type in selftests_enabled_test_types %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+
+[selftests:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% for test_type in selftests_enabled_test_types %}
+[selftests_{{ test_type | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+
+[selftests_{{ test_type | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
diff --git a/playbooks/roles/gen_hosts/templates/workflows/sysbench.j2 b/playbooks/roles/gen_hosts/templates/workflows/sysbench.j2
new file mode 100644
index 00000000..34f22a83
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/sysbench.j2
@@ -0,0 +1,53 @@
+{# Workflow template for sysbench #}
+[all]
+localhost ansible_connection=local
+{% for test in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test }}-dev
+{% endif %}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for test in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for test in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test }}-dev
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[sysbench]
+{% for test in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test }}-dev
+{% endif %}
+{% endfor %}
+
+[sysbench:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% for test in enabled_sysbench_tests %}
+[sysbench_{{ test | replace('-', '_') }}]
+{{ kdevops_host_prefix }}-{{ test }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test }}-dev
+{% endif %}
+
+[sysbench_{{ test | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endfor %}
diff --git a/workflows/ai/Makefile b/workflows/ai/Makefile
index 1c297edd..7e9b8af2 100644
--- a/workflows/ai/Makefile
+++ b/workflows/ai/Makefile
@@ -3,9 +3,6 @@ PHONY += ai-setup ai-uninstall ai-destroy ai-help-menu
 PHONY += ai-tests ai-tests-baseline ai-tests-dev
 PHONY += ai-tests-results
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := hosts.j2
-endif
 
 export AI_DATA_TARGET := $(subst ",,$(CONFIG_AI_BENCHMARK_RESULTS_DIR))
 export AI_ARGS :=
diff --git a/workflows/blktests/Makefile b/workflows/blktests/Makefile
index 11cd9a65..12c0d933 100644
--- a/workflows/blktests/Makefile
+++ b/workflows/blktests/Makefile
@@ -11,9 +11,6 @@ ID=$(shell id -u)
 
 BLKTESTS_ARGS	:=
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := blktests.j2
-endif
 
 BLKTESTS_GIT:=$(subst ",,$(CONFIG_BLKTESTS_GIT))
 BLKTESTS_DATA:=$(subst ",,$(CONFIG_BLKTESTS_DATA))
diff --git a/workflows/cxl/Makefile b/workflows/cxl/Makefile
index bb130057..6d7894a9 100644
--- a/workflows/cxl/Makefile
+++ b/workflows/cxl/Makefile
@@ -1,7 +1,5 @@
 # SPDX-License-Identifier: copyleft-next-0.3.1
 
-export KDEVOPS_HOSTS_TEMPLATE := cxl.j2
-
 CXL_ARGS :=
 CXL_ARGS += ndctl_git='$(subst ",,$(CONFIG_NDCTL_GIT))'
 CXL_ARGS += ndctl_data=\"$(subst ",,$(CONFIG_NDCTL_DATA))\"
diff --git a/workflows/demos/reboot-limit/Kconfig b/workflows/demos/reboot-limit/Kconfig
index ecafe4bd..4fdcba05 100644
--- a/workflows/demos/reboot-limit/Kconfig
+++ b/workflows/demos/reboot-limit/Kconfig
@@ -8,6 +8,11 @@ config WORKFLOWS_REBOOT_LIMIT
 	  you really have no idea clearly if you can reboot without issues
 	  forever and may end up with a false positive on an unidentified issue.
 
+config KDEVOPS_WORKFLOW_NAME
+	string
+	output yaml
+	default "reboot-limit" if WORKFLOWS_REBOOT_LIMIT
+
 if WORKFLOWS_REBOOT_LIMIT
 
 menu "Configure and reboot-limit"
diff --git a/workflows/fio-tests/Makefile b/workflows/fio-tests/Makefile
index 5eb2ccbd..218cfbfc 100644
--- a/workflows/fio-tests/Makefile
+++ b/workflows/fio-tests/Makefile
@@ -1,6 +1,3 @@
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := fio-tests.j2
-endif
 
 fio-tests:
 	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
diff --git a/workflows/fstests/Makefile b/workflows/fstests/Makefile
index d2e5c636..ef6347ac 100644
--- a/workflows/fstests/Makefile
+++ b/workflows/fstests/Makefile
@@ -8,9 +8,6 @@ FSTESTS_BASELINE_EXTRA :=
 
 export FSTYP:=$(subst ",,$(CONFIG_FSTESTS_FSTYP))
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := fstests.j2
-endif
 
 FSTESTS_ARGS += fstests_fstyp='$(FSTYP)'
 FS_CONFIG='$(FSTYP)/$(FSTYP).config'
diff --git a/workflows/gitr/Makefile b/workflows/gitr/Makefile
index c685395a..b1b7fe37 100644
--- a/workflows/gitr/Makefile
+++ b/workflows/gitr/Makefile
@@ -1,6 +1,3 @@
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := gitr.j2
-endif
 
 GITR_MNT:=$(subst ",,$(CONFIG_GITR_MNT))
 GITR_ARGS += gitr_mnt=$(GITR_MNT)
diff --git a/workflows/linux/Makefile b/workflows/linux/Makefile
index 30b123f9..1ab9d55d 100644
--- a/workflows/linux/Makefile
+++ b/workflows/linux/Makefile
@@ -14,7 +14,6 @@ TREE_CONFIG:=config-$(TREE_REF)-pure-iomap
 endif
 
 ifeq (y,$(CONFIG_BOOTLINUX_BUILDER))
-KDEVOPS_HOSTS_TEMPLATE=builder.j2
 endif
 
 # Describes the Linux clone
diff --git a/workflows/ltp/Makefile b/workflows/ltp/Makefile
index 767465bc..592c4f2b 100644
--- a/workflows/ltp/Makefile
+++ b/workflows/ltp/Makefile
@@ -1,6 +1,3 @@
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := ltp.j2
-endif
 
 LTP_REPO:=$(subst ",,$(CONFIG_LTP_REPO))
 LTP_ARGS += ltp_repo=$(LTP_REPO)
diff --git a/workflows/mmtests/Makefile b/workflows/mmtests/Makefile
index b65d256b..97934dc0 100644
--- a/workflows/mmtests/Makefile
+++ b/workflows/mmtests/Makefile
@@ -1,8 +1,5 @@
 MMTESTS_ARGS	:=
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := mmtests.j2
-endif
 
 mmtests:
 	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
diff --git a/workflows/nfstest/Makefile b/workflows/nfstest/Makefile
index bbfd3f64..fca7a51a 100644
--- a/workflows/nfstest/Makefile
+++ b/workflows/nfstest/Makefile
@@ -1,6 +1,3 @@
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := nfstest.j2
-endif # CONFIG_WORKFLOWS_DEDICATED_WORKFLOW
 
 ifeq (y,$(CONFIG_NFSTEST_USE_KDEVOPS_NFSD))
 NFSTEST_ARGS += nfstest_nfs_server_host='$(subst ",,$(CONFIG_KDEVOPS_HOSTS_PREFIX))-nfsd'
diff --git a/workflows/pynfs/Makefile b/workflows/pynfs/Makefile
index 2f3ff97b..e0da0cf5 100644
--- a/workflows/pynfs/Makefile
+++ b/workflows/pynfs/Makefile
@@ -1,6 +1,3 @@
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := pynfs.j2
-endif
 
 PYNFS_GIT:=$(subst ",,$(CONFIG_PYNFS_GIT))
 PYNFS_ARGS += pynfs_git=$(PYNFS_GIT)
diff --git a/workflows/selftests/Makefile b/workflows/selftests/Makefile
index d3b7044c..b040647e 100644
--- a/workflows/selftests/Makefile
+++ b/workflows/selftests/Makefile
@@ -2,9 +2,6 @@
 
 SELFTESTS_ARGS :=
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := selftests.j2
-endif
 
 SELFTESTS_DYNAMIC_RUNTIME_VARS := "kdevops_run_selftests": True
 
diff --git a/workflows/sysbench/Makefile b/workflows/sysbench/Makefile
index daf7bc75..66e594d3 100644
--- a/workflows/sysbench/Makefile
+++ b/workflows/sysbench/Makefile
@@ -1,8 +1,5 @@
 PHONY += sysbench sysbench-test sysbench-telemetry sysbench-help-menu
 
-ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
-export KDEVOPS_HOSTS_TEMPLATE := sysbench.j2
-endif
 
 TAGS_SYSBENCH_RUN := db_start
 TAGS_SYSBENCH_RUN += db_test_connection
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure
  2025-09-02 23:53 [PATCH v3 0/3] declared hosts support Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 1/3] gen_hosts: use kdevops_workflow_name directly for template selection Luis Chamberlain
@ 2025-09-02 23:53 ` Luis Chamberlain
  2025-09-03  9:02   ` Daniel Gomez
  2025-09-02 23:53 ` [PATCH v3 3/3] minio: add MinIO Warp S3 benchmarking with declared hosts support Luis Chamberlain
  2 siblings, 1 reply; 5+ messages in thread
From: Luis Chamberlain @ 2025-09-02 23:53 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops
  Cc: hui81.qi, kundan.kumar, Luis Chamberlain

This adds support for using pre-existing infrastructure (bare metal servers,
pre-provisioned VMs, cloud instances) that users already have SSH access to,
bypassing the kdevops bringup process.

We borrow the DECLARE_* foo practice from the Linux kernel to ensure the
user will declare the hosts they already have set up with:

  make DECLARE_HOSTS="foo bar" defconfig-foo
  or
  make DECLARE_HOSTS="foo bar" menuconfig

We just skip the data partition setup at the role level. The user
is encouraged to set DATA_PATH if they want something other than /data/
to be used. The onus is on them to ensure that DATA_PATH works for
the user the host is already configured for SSH access as.

Currently no workflows are fully supported with declared hosts.
Each workflow requires individual review and testing to ensure proper
operation with pre-existing infrastructure before being enabled.

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Makefile                                      |   7 +
 kconfigs/Kconfig.bringup                      |   8 +
 kconfigs/Kconfig.declared_hosts               |  71 ++++++
 kconfigs/workflows/Kconfig                    |  23 ++
 kconfigs/workflows/Kconfig.data_partition     |  13 +-
 playbooks/create_data_partition.yml           |   2 +
 .../create_data_partition/defaults/main.yml   |   1 +
 playbooks/roles/devconfig/defaults/main.yml   |   2 +
 playbooks/roles/devconfig/tasks/main.yml      |  25 ++
 playbooks/roles/gen_hosts/defaults/main.yml   |   1 +
 playbooks/roles/gen_hosts/tasks/main.yml      |  17 ++
 playbooks/roles/gen_hosts/templates/hosts.j2  |   6 +
 .../templates/workflows/declared-hosts.j2     | 239 ++++++++++++++++++
 13 files changed, 414 insertions(+), 1 deletion(-)
 create mode 100644 kconfigs/Kconfig.declared_hosts
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/declared-hosts.j2

diff --git a/Makefile b/Makefile
index 38147009..ad744613 100644
--- a/Makefile
+++ b/Makefile
@@ -19,6 +19,13 @@ export KDEVOPS_NODES :=
 export PYTHONUNBUFFERED=1
 export TOPDIR=./
 export TOPDIR_PATH = $(shell readlink -f $(TOPDIR))
+
+# Export CLI override variables for Kconfig to detect them
+# Note: We accept DECLARE_HOSTS but export as DECLARED_HOSTS for consistency
+ifdef DECLARE_HOSTS
+export DECLARED_HOSTS := $(DECLARE_HOSTS)
+endif
+
 include scripts/refs.Makefile
 
 KDEVOPS_NODES_ROLE_TEMPLATE_DIR :=		$(KDEVOPS_PLAYBOOKS_DIR)/roles/gen_nodes/templates
diff --git a/kconfigs/Kconfig.bringup b/kconfigs/Kconfig.bringup
index 78d01249..0b1c8805 100644
--- a/kconfigs/Kconfig.bringup
+++ b/kconfigs/Kconfig.bringup
@@ -13,6 +13,13 @@ config CLOUD_INITIALIZED
 	bool
 	default $(shell, test -f .cloud.initialized && echo y || echo n) = "y"
 
+# CLI override detection for DECLARED_HOSTS which should enable SKIP_BRINGUP
+config SKIP_BRINGUP_SET_BY_CLI
+	bool
+	default $(shell, scripts/check-cli-set-var.sh DECLARED_HOSTS)
+	select SKIP_BRINGUP
+	select KDEVOPS_USE_DECLARED_HOSTS
+
 choice
 	prompt "Node bring up method"
 	default TERRAFORM if CLOUD_INITIALIZED
@@ -85,6 +92,7 @@ config LIBVIRT
 source "kconfigs/Kconfig.guestfs"
 source "kconfigs/Kconfig.nixos"
 source "terraform/Kconfig"
+source "kconfigs/Kconfig.declared_hosts"
 if LIBVIRT
 source "kconfigs/Kconfig.libvirt"
 endif
diff --git a/kconfigs/Kconfig.declared_hosts b/kconfigs/Kconfig.declared_hosts
new file mode 100644
index 00000000..cfdae8fa
--- /dev/null
+++ b/kconfigs/Kconfig.declared_hosts
@@ -0,0 +1,71 @@
+# Configuration for declared hosts that skip bringup process
+
+config KDEVOPS_USE_DECLARED_HOSTS
+	bool "Use declared hosts (skip bringup process)"
+	select WORKFLOW_INFER_USER_AND_GROUP
+	output yaml
+	help
+	  Enable this option to use pre-existing hosts that you have already
+	  configured with SSH access. This is useful for:
+
+	  * Bare metal systems
+	  * Pre-provisioned VMs or cloud instances
+	  * Systems managed by other infrastructure tools
+
+	  When this option is enabled:
+	  - SSH keys will not be generated (assumes you already have access)
+	  - Bringup and teardown operations will be skipped
+	  - User and group settings will be inferred from the target hosts
+	  - You must provide the list of hosts in KDEVOPS_DECLARED_HOSTS
+
+	  This option automatically:
+	  - Selects WORKFLOW_INFER_USER_AND_GROUP to detect the correct
+	    user and group on the target systems
+	  - Assumes SSH access is already configured
+
+if KDEVOPS_USE_DECLARED_HOSTS
+
+config KDEVOPS_DECLARED_HOSTS
+	string "List of declared hosts"
+	output yaml
+	default "$(shell, echo ${DECLARED_HOSTS})"
+	default ""
+	help
+	  Provide a list of hostnames or IP addresses for the pre-existing
+	  systems you want to use. These hosts must already be accessible
+	  via SSH with the appropriate keys configured.
+
+	  Format: Space or comma-separated list
+	  Example: "host1 host2 host3" or "host1,host2,host3"
+
+	  These hosts will be used directly without any bringup process.
+	  Make sure you have:
+	  - SSH access configured
+	  - Required packages installed
+	  - Appropriate user permissions
+
+config KDEVOPS_DECLARED_HOSTS_PYTHON_INTERPRETER
+	string "Python interpreter path on declared hosts"
+	default "/usr/bin/python3"
+	output yaml
+	help
+	  Specify the path to the Python interpreter on the declared hosts.
+	  This is required for Ansible to function properly.
+
+	  Common values:
+	  - /usr/bin/python3 (most modern systems)
+	  - /usr/bin/python (older systems)
+	  - /usr/local/bin/python3 (custom installations)
+
+config KDEVOPS_DECLARED_HOSTS_GROUP_VARS
+	bool "Apply group variables to declared hosts"
+	default y
+	output yaml
+	help
+	  When enabled, kdevops will apply the appropriate group variables
+	  to the declared hosts based on the selected workflow.
+
+	  This ensures that declared hosts receive the same configuration
+	  variables as dynamically provisioned hosts would.
+
+endif # KDEVOPS_USE_DECLARED_HOSTS
diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index de279b48..30d4fc5e 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -124,6 +124,7 @@ choice
 config KDEVOPS_WORKFLOW_DEDICATE_FSTESTS
 	bool "fstests"
 	select KDEVOPS_WORKFLOW_ENABLE_FSTESTS
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	help
 	  This will dedicate your configuration only to fstests.
 
@@ -139,12 +140,14 @@ config KDEVOPS_WORKFLOW_DEDICATE_FSTESTS
 
 config KDEVOPS_WORKFLOW_DEDICATE_BLKTESTS
 	bool "blktests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_BLKTESTS
 	help
 	  This will dedicate your configuration only to blktests.
 
 config KDEVOPS_WORKFLOW_DEDICATE_CXL
 	bool "cxl"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_CXL
 	help
 	  This will dedicate your configuration only to cxl work.
@@ -159,6 +162,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_CXL
 
 config KDEVOPS_WORKFLOW_DEDICATE_PYNFS
 	bool "pynfs"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_PYNFS
 	help
 	  This will dedicate your configuration only to running pynfs
@@ -166,6 +170,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_PYNFS
 
 config KDEVOPS_WORKFLOW_DEDICATE_SELFTESTS
 	bool "Linux kernel selftests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_SELFTESTS
 	help
 	  This will dedicate your configuration only to Linux kernel
@@ -174,6 +179,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_SELFTESTS
 
 config KDEVOPS_WORKFLOW_DEDICATE_GITR
 	bool "gitr"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_GITR
 	help
 	  This will dedicate your configuration to running only the
@@ -181,6 +187,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_GITR
 
 config KDEVOPS_WORKFLOW_DEDICATE_LTP
 	bool "ltp"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_LTP
 	help
 	  This will dedicate your configuration to running only the
@@ -188,6 +195,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_LTP
 
 config KDEVOPS_WORKFLOW_DEDICATE_NFSTEST
 	bool "nfstest"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_NFSTEST
 	help
 	  This will dedicate your configuration to running only the
@@ -195,6 +203,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_NFSTEST
 
 config KDEVOPS_WORKFLOW_DEDICATE_SYSBENCH
 	bool "sysbench"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
 	help
 	  This will dedicate your configuration to running only the
@@ -202,6 +211,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_SYSBENCH
 
 config KDEVOPS_WORKFLOW_DEDICATE_MMTESTS
 	bool "mmtests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_MMTESTS
 	help
 	  This will dedicate your configuration to running only the
@@ -209,6 +219,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_MMTESTS
 
 config KDEVOPS_WORKFLOW_DEDICATE_FIO_TESTS
 	bool "fio-tests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_FIO_TESTS
 	help
 	  This will dedicate your configuration to running only the
@@ -216,6 +227,7 @@ config KDEVOPS_WORKFLOW_DEDICATE_FIO_TESTS
 
 config KDEVOPS_WORKFLOW_DEDICATE_AI
 	bool "ai"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_AI
 	help
 	  This will dedicate your configuration to running only the
@@ -264,6 +276,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_FSTESTS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_BLKTESTS
 	bool "blktests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_BLKTESTS
 	help
 	  Select this option if you are doing block layer development and want
@@ -272,6 +285,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_BLKTESTS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_CXL
 	bool "cxl"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_CXL
 	help
 	  Select this option if you are doing cxl development and testing.
@@ -285,6 +299,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_CXL
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_PYNFS
 	bool "pynfs"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_PYNFS
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -294,6 +309,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_PYNFS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SELFTESTS
 	bool "Linux kernel selftest"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_SELFTESTS
 	help
 	  Select this option if you are doing Linux kernel developent and
@@ -301,6 +317,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SELFTESTS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_GITR
 	bool "gitr"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_GITR
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -309,6 +326,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_GITR
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_LTP
 	bool "ltp"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_LTP
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -317,6 +335,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_LTP
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_NFSTEST
 	bool "nfstest"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_NFSTEST
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -325,6 +344,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_NFSTEST
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SYSBENCH
 	bool "sysbench"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -333,6 +353,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SYSBENCH
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_MMTESTS
 	bool "mmtests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_MMTESTS
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -341,6 +362,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_MMTESTS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_FIO_TESTS
 	bool "fio-tests"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_FIO_TESTS
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
@@ -349,6 +371,7 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_FIO_TESTS
 
 config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_AI
 	bool "ai"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	select KDEVOPS_WORKFLOW_ENABLE_AI
 	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
 	help
diff --git a/kconfigs/workflows/Kconfig.data_partition b/kconfigs/workflows/Kconfig.data_partition
index 6b17cddf..847d4dbd 100644
--- a/kconfigs/workflows/Kconfig.data_partition
+++ b/kconfigs/workflows/Kconfig.data_partition
@@ -1,5 +1,10 @@
+config WORKFLOW_DATA_PATH_SET_BY_CLI
+	bool
+	default $(shell, scripts/check-cli-set-var.sh DATA_PATH)
+
 config WORKFLOW_DATA_DEVICE_ENABLE_CUSTOM
 	bool "Enable custom device to use to create the workflow data parition"
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	help
 	  Enable this if you want to override the default data device.
 	  Typically we have enough heuristics with kconfig to get this right
@@ -29,6 +34,7 @@ config WORKFLOW_DATA_DEVICE_CUSTOM
 
 config WORKFLOW_DATA_DEVICE
 	string
+	depends on !KDEVOPS_USE_DECLARED_HOSTS
 	default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops0" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
 	default "/dev/disk/by-id/virtio-kdevops0" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
 	default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops0" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
@@ -37,16 +43,21 @@ config WORKFLOW_DATA_DEVICE
 	default TERRAFORM_AWS_DATA_VOLUME_DEVICE_FILE_NAME if TERRAFORM_AWS
 	default TERRAFORM_OCI_DATA_VOLUME_DEVICE_FILE_NAME if TERRAFORM_OCI
 	default WORKFLOW_DATA_DEVICE_CUSTOM if WORKFLOW_DATA_DEVICE_ENABLE_CUSTOM
+	default "" if KDEVOPS_USE_DECLARED_HOSTS
 
 config WORKFLOW_DATA_PATH
 	string "Directory path to place data for workflows"
-	default "/data"
+	default "$(shell, echo ${DATA_PATH})" if WORKFLOW_DATA_PATH_SET_BY_CLI
+	default "/data" if !WORKFLOW_DATA_PATH_SET_BY_CLI
 	help
 	  The data workflow is kept then in a location other than your default
 	  user home directory. Use this option to specify the path which we
 	  will use to place your workflow data. This will be the mount point
 	  for the new worfklow data partition created.
 
+	  When using declared hosts, this path should already exist on the
+	  target systems.
+
 config WORKFLOW_INFER_USER_AND_GROUP
 	bool "Infer user and group to use for the workflow data partition"
 	default y
diff --git a/playbooks/create_data_partition.yml b/playbooks/create_data_partition.yml
index b180cf4b..4336bbff 100644
--- a/playbooks/create_data_partition.yml
+++ b/playbooks/create_data_partition.yml
@@ -3,3 +3,5 @@
   hosts: baseline:dev
   roles:
     - role: create_data_partition
+      when:
+        - not kdevops_use_declared_hosts|bool
diff --git a/playbooks/roles/create_data_partition/defaults/main.yml b/playbooks/roles/create_data_partition/defaults/main.yml
index 0afbb52e..c2d05bb4 100644
--- a/playbooks/roles/create_data_partition/defaults/main.yml
+++ b/playbooks/roles/create_data_partition/defaults/main.yml
@@ -1,2 +1,3 @@
 ---
 kdevops_enable_terraform: false
+kdevops_use_declared_hosts: false
diff --git a/playbooks/roles/devconfig/defaults/main.yml b/playbooks/roles/devconfig/defaults/main.yml
index 98dce312..c5c06e01 100644
--- a/playbooks/roles/devconfig/defaults/main.yml
+++ b/playbooks/roles/devconfig/defaults/main.yml
@@ -57,3 +57,5 @@ kdevops_enable_guestfs: false
 guestfs_copy_sources_from_host_to_guest: false
 distro_debian_has_hop1_sources: false
 unattended_upgrades_installed: false
+workflow_infer_user_and_group: false
+kdevops_use_declared_hosts: false
diff --git a/playbooks/roles/devconfig/tasks/main.yml b/playbooks/roles/devconfig/tasks/main.yml
index fccd1fcf..ae16a698 100644
--- a/playbooks/roles/devconfig/tasks/main.yml
+++ b/playbooks/roles/devconfig/tasks/main.yml
@@ -17,6 +17,31 @@
   ansible.builtin.setup:
   tags: always
 
+# For declared hosts, infer user and group from the target systems
+- name: Infer user on declared hosts
+  ansible.builtin.command: "whoami"
+  register: declared_host_user
+  when:
+    - kdevops_use_declared_hosts
+    - workflow_infer_user_and_group
+  changed_when: false
+
+- name: Infer group on declared hosts
+  ansible.builtin.command: "id -g -n"
+  register: declared_host_group
+  when:
+    - kdevops_use_declared_hosts
+    - workflow_infer_user_and_group
+  changed_when: false
+
+- name: Set inferred user and group for declared hosts
+  set_fact:
+    data_user: "{{ declared_host_user.stdout | default(data_user) }}"
+    data_group: "{{ declared_host_group.stdout | default(data_group) }}"
+  when:
+    - kdevops_use_declared_hosts
+    - workflow_infer_user_and_group
+
 # Update /etc/hostname first so the change gets picked up by the reboot
 # that occurs during the distro-specific tasks
 
diff --git a/playbooks/roles/gen_hosts/defaults/main.yml b/playbooks/roles/gen_hosts/defaults/main.yml
index 4a7515f9..b0b59542 100644
--- a/playbooks/roles/gen_hosts/defaults/main.yml
+++ b/playbooks/roles/gen_hosts/defaults/main.yml
@@ -31,6 +31,7 @@ kdevops_workflow_enable_fio_tests: false
 kdevops_workflow_enable_mmtests: false
 kdevops_workflow_enable_ai: false
 workflows_reboot_limit: false
+kdevops_use_declared_hosts: false
 
 is_fstests: false
 fstests_fstyp: "bogus"
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index 518064ed..d44566ad 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -10,6 +10,19 @@
       skip: true
   tags: vars
 
+- name: Parse declared hosts list when using declared hosts
+  set_fact:
+    kdevops_declared_hosts: >-
+      {%- if kdevops_declared_hosts is string -%}
+        {{ (kdevops_declared_hosts | default('')) | regex_replace(',', ' ') | split() }}
+      {%- else -%}
+        {{ kdevops_declared_hosts }}
+      {%- endif -%}
+  when:
+    - kdevops_use_declared_hosts
+    - kdevops_declared_hosts is defined
+  tags: vars
+
 - name: Get our user
   ansible.builtin.command: "whoami"
   register: my_user
@@ -75,6 +88,7 @@
   when:
     - is_fstests
     - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts
 
 - name: Infer enabled blktests test section types
   ansible.builtin.set_fact:
@@ -89,6 +103,7 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_blktests
     - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts
 
 - name: Debug inferring block test types
   ansible.builtin.debug:
@@ -112,6 +127,7 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_selftests
     - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts
 
 - name: Collect dynamically supported filesystems
   vars:
@@ -155,6 +171,7 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_mmtests
     - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts
 
 - name: Load AI nodes configuration for multi-filesystem setup
   include_vars:
diff --git a/playbooks/roles/gen_hosts/templates/hosts.j2 b/playbooks/roles/gen_hosts/templates/hosts.j2
index be0378db..2ec18de8 100644
--- a/playbooks/roles/gen_hosts/templates/hosts.j2
+++ b/playbooks/roles/gen_hosts/templates/hosts.j2
@@ -5,4 +5,10 @@ proper identation. We don't need identation for the ansible hosts file.
 Each workflow which has its own custom ansible host file generated should use
 its own jinja2 template file and define its own ansible task for its generation.
 #}
+{% if kdevops_declared_hosts is defined and kdevops_declared_hosts %}
+{# Use declared hosts that skip bringup process - for bare metal or pre-existing infrastructure #}
+{% include 'workflows/declared-hosts.j2' %}
+{% else %}
+{# Include workflow-specific template dynamically based on workflow name #}
 {% include 'workflows/' + kdevops_workflow_name + '.j2' %}
+{% endif %}
diff --git a/playbooks/roles/gen_hosts/templates/workflows/declared-hosts.j2 b/playbooks/roles/gen_hosts/templates/workflows/declared-hosts.j2
new file mode 100644
index 00000000..e017d2c7
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/declared-hosts.j2
@@ -0,0 +1,239 @@
+{# Template for declared hosts that skip bringup process #}
+{# This template is used when users have pre-existing infrastructure like:
+   - Bare metal servers
+   - Pre-provisioned VMs
+   - Cloud instances managed outside of kdevops
+   - Any hosts with existing SSH access
+
+   The hosts are provided via kdevops_declared_hosts variable which contains
+   a list of hostnames/IPs that are already accessible via SSH.
+#}
+{# Parse declared hosts if it's a string #}
+{% if kdevops_declared_hosts is string %}
+{% set parsed_hosts = kdevops_declared_hosts | regex_replace(',', ' ') | split() %}
+{% else %}
+{% set parsed_hosts = kdevops_declared_hosts %}
+{% endif %}
+[all]
+localhost ansible_connection=local
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% if kdevops_workflows_dedicated_workflow %}
+{# For workflows, organize hosts into baseline/dev groups for A/B testing #}
+{% if kdevops_baseline_and_dev %}
+[baseline]
+{# Odd-numbered hosts become baseline nodes #}
+{% for host in parsed_hosts %}
+{% if loop.index is odd %}
+{{ host }}
+{% endif %}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+[dev]
+{# Even-numbered hosts become dev nodes #}
+{% for host in parsed_hosts %}
+{% if loop.index is even %}
+{{ host }}
+{% endif %}
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% else %}
+{# Without A/B testing, all hosts are baseline #}
+[baseline]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+{% endif %}
+
+{# Add workflow-specific groups based on enabled workflow #}
+{% if workflows_reboot_limit %}
+[reboot-limit]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[reboot-limit:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_fio_tests %}
+[fio_tests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[fio_tests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_fstests %}
+[fstests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[fstests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{# Add per-section groups if needed #}
+{% for section in fstests_enabled_test_types|default([]) %}
+[fstests_{{ section | replace('-', '_') }}]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[fstests_{{ section | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+{% endfor %}
+
+{% elif kdevops_workflow_enable_blktests %}
+[blktests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[blktests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_selftests %}
+[selftests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[selftests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_mmtests %}
+[mmtests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[mmtests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_sysbench %}
+[sysbench]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[sysbench:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_ai|default(false)|bool %}
+[ai]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[ai:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_minio|default(false)|bool %}
+[minio]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[minio:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_cxl %}
+[cxl]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[cxl:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_pynfs %}
+[pynfs]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[pynfs:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_gitr %}
+[gitr]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[gitr:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_ltp %}
+[ltp]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[ltp:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{% elif kdevops_workflow_enable_nfstest %}
+[nfstest]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[nfstest:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+{% endif %}
+
+{% else %}
+{# Non-workflow setup - just use baseline group #}
+[baseline]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{# For non-dedicated workflows (mix mode), add fstests group if enabled #}
+{% if kdevops_workflow_enable_fstests %}
+[fstests]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[fstests:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+
+{# Add per-section groups if needed #}
+{% for section in fstests_enabled_test_types|default([]) %}
+[fstests_{{ section | replace('-', '_') }}]
+{% for host in parsed_hosts %}
+{{ host }}
+{% endfor %}
+
+[fstests_{{ section | replace('-', '_') }}:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
+{% endfor %}
+{% endif %}
+
+{% endif %}
+
+[service]
+{# Service nodes are typically not needed for declared hosts #}
+
+[service:vars]
+ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"
-- 
2.50.1



* [PATCH v3 3/3] minio: add MinIO Warp S3 benchmarking with declared hosts support
  2025-09-02 23:53 [PATCH v3 0/3] declared hosts support Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 1/3] gen_hosts: use kdevops_workflow_name directly for template selection Luis Chamberlain
  2025-09-02 23:53 ` [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure Luis Chamberlain
@ 2025-09-02 23:53 ` Luis Chamberlain
  2 siblings, 0 replies; 5+ messages in thread
From: Luis Chamberlain @ 2025-09-02 23:53 UTC (permalink / raw)
  To: Chuck Lever, Daniel Gomez, kdevops
  Cc: hui81.qi, kundan.kumar, Luis Chamberlain

The AI Milvus workflow already leverages minio as an object storage
solution, but we want to test minio directly. To do so, extract the
minio support into its own role and extend it with MinIO's dedicated
S3 benchmark suite, Warp.

This is our first workflow vetted for declared hosts support.
For example, to leverage defconfig-minio-warp-xfs for a 4k XFS
filesystem against an existing system foo:

make defconfig-minio-warp-xfs DECLARE_HOSTS=foo WARP_DEVICE=/dev/nvme4n1
make
make minio
make minio-warp

See the results on workflows/minio/results/

To test with 16k block size you can use:

make defconfig-minio-warp-xfs-16k DECLARE_HOSTS=foo WARP_DEVICE=/dev/nvme4n1

MinIO Warp Workflow:
- Add MinIO server deployment via Docker containers
- Implement Warp S3 benchmark suite with configurable parameters
- Support both single and more extensive benchmark modes
- Add storage configuration for XFS/Btrfs/ext4 filesystems
- Include benchmark result analysis and visualization tools
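As a hedged sketch of what a Warp benchmark run looks like underneath (the
exact flags kdevops passes live in the minio_warp_run role, not here; the
endpoint, credentials, and duration below are placeholder assumptions), the
command line can be assembled like this:

```python
def build_warp_cmd(benchmark="mixed", host="localhost:9000",
                   access_key="minioadmin", secret_key="minioadmin",
                   duration="1m", extra=()):
    """Assemble an argv for MinIO's warp benchmarking tool.

    The endpoint and credentials here are illustrative defaults,
    not necessarily what the kdevops role uses.
    """
    cmd = ["warp", benchmark,
           "--host", host,
           "--access-key", access_key,
           "--secret-key", secret_key,
           "--duration", duration]
    cmd.extend(extra)
    return cmd


print(" ".join(build_warp_cmd()))
# warp mixed --host localhost:9000 --access-key minioadmin --secret-key minioadmin --duration 1m
```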

Generated-by: Claude AI
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 .gitignore                                    |   2 +
 defconfigs/minio-warp                         |  40 +
 defconfigs/minio-warp-ab                      |  41 +
 defconfigs/minio-warp-btrfs                   |  35 +
 defconfigs/minio-warp-declared-hosts          |  56 ++
 defconfigs/minio-warp-multifs                 |  74 ++
 defconfigs/minio-warp-storage                 |  65 ++
 defconfigs/minio-warp-xfs                     |  53 ++
 defconfigs/minio-warp-xfs-16k                 |  53 ++
 defconfigs/minio-warp-xfs-lbs                 |  65 ++
 kconfigs/workflows/Kconfig                    |  19 +
 playbooks/minio.yml                           |  53 ++
 playbooks/roles/ai_setup/tasks/main.yml       |  40 +-
 playbooks/roles/gen_hosts/tasks/main.yml      |  15 +
 .../gen_hosts/templates/workflows/minio.j2    | 173 ++++
 playbooks/roles/gen_nodes/tasks/main.yml      | 116 +++
 playbooks/roles/minio_destroy/tasks/main.yml  |  34 +
 playbooks/roles/minio_install/tasks/main.yml  |  61 ++
 playbooks/roles/minio_results/tasks/main.yml  |  86 ++
 playbooks/roles/minio_setup/defaults/main.yml |  16 +
 playbooks/roles/minio_setup/tasks/main.yml    | 100 ++
 .../roles/minio_uninstall/tasks/main.yml      |  17 +
 playbooks/roles/minio_warp_run/tasks/main.yml | 249 +++++
 .../templates/warp_config.json.j2             |  14 +
 workflows/Makefile                            |   4 +
 workflows/minio/Kconfig                       |  23 +
 workflows/minio/Kconfig.docker                |  66 ++
 workflows/minio/Kconfig.storage               | 364 ++++++++
 workflows/minio/Kconfig.warp                  | 141 +++
 workflows/minio/Makefile                      |  76 ++
 .../minio/scripts/analyze_warp_results.py     | 858 ++++++++++++++++++
 .../minio/scripts/generate_warp_report.py     | 404 +++++++++
 .../minio/scripts/run_benchmark_suite.sh      | 116 +++
 33 files changed, 3503 insertions(+), 26 deletions(-)
 create mode 100644 defconfigs/minio-warp
 create mode 100644 defconfigs/minio-warp-ab
 create mode 100644 defconfigs/minio-warp-btrfs
 create mode 100644 defconfigs/minio-warp-declared-hosts
 create mode 100644 defconfigs/minio-warp-multifs
 create mode 100644 defconfigs/minio-warp-storage
 create mode 100644 defconfigs/minio-warp-xfs
 create mode 100644 defconfigs/minio-warp-xfs-16k
 create mode 100644 defconfigs/minio-warp-xfs-lbs
 create mode 100644 playbooks/minio.yml
 create mode 100644 playbooks/roles/gen_hosts/templates/workflows/minio.j2
 create mode 100644 playbooks/roles/minio_destroy/tasks/main.yml
 create mode 100644 playbooks/roles/minio_install/tasks/main.yml
 create mode 100644 playbooks/roles/minio_results/tasks/main.yml
 create mode 100644 playbooks/roles/minio_setup/defaults/main.yml
 create mode 100644 playbooks/roles/minio_setup/tasks/main.yml
 create mode 100644 playbooks/roles/minio_uninstall/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/tasks/main.yml
 create mode 100644 playbooks/roles/minio_warp_run/templates/warp_config.json.j2
 create mode 100644 workflows/minio/Kconfig
 create mode 100644 workflows/minio/Kconfig.docker
 create mode 100644 workflows/minio/Kconfig.storage
 create mode 100644 workflows/minio/Kconfig.warp
 create mode 100644 workflows/minio/Makefile
 create mode 100755 workflows/minio/scripts/analyze_warp_results.py
 create mode 100755 workflows/minio/scripts/generate_warp_report.py
 create mode 100755 workflows/minio/scripts/run_benchmark_suite.sh

diff --git a/.gitignore b/.gitignore
index 09d2ae33..720c94b6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -89,6 +89,8 @@ playbooks/roles/linux-mirror/linux-mirror-systemd/mirrors.yaml
 #   yet.
 workflows/selftests/results/
 
+workflows/minio/results/
+
 workflows/linux/refs/default/Kconfig.linus
 workflows/linux/refs/default/Kconfig.next
 workflows/linux/refs/default/Kconfig.stable
diff --git a/defconfigs/minio-warp b/defconfigs/minio-warp
new file mode 100644
index 00000000..473ffeae
--- /dev/null
+++ b/defconfigs/minio-warp
@@ -0,0 +1,40 @@
+#
+# MinIO Warp S3 benchmarking configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_BENCHMARK_MIXED=y
+CONFIG_MINIO_WARP_BENCHMARK_TYPE="mixed"
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
diff --git a/defconfigs/minio-warp-ab b/defconfigs/minio-warp-ab
new file mode 100644
index 00000000..f20142d1
--- /dev/null
+++ b/defconfigs/minio-warp-ab
@@ -0,0 +1,41 @@
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# A/B Testing Configuration
+CONFIG_KDEVOPS_BASELINE_AND_DEV=y
+
+# MinIO Configuration
+CONFIG_MINIO_ENABLE=y
+CONFIG_MINIO_CONTAINER_IMAGE="minio/minio:latest"
+CONFIG_MINIO_CONTAINER_NAME="minio-server"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-network"
+
+# Warp Benchmark Configuration - Mixed workload
+CONFIG_MINIO_WARP_BENCHMARK_MIXED=y
+CONFIG_MINIO_WARP_DURATION="2m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=n
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+# Enable web UI for monitoring
+CONFIG_MINIO_WARP_ENABLE_WEB_UI=y
+CONFIG_MINIO_WARP_WEB_UI_PORT=7762
+
+# Node configuration for A/B testing
+CONFIG_KDEVOPS_HOSTS_TEMPLATE="hosts.j2"
+CONFIG_KDEVOPS_NODES_TEMPLATE="nodes.j2"
+CONFIG_KDEVOPS_PLAYBOOK_DIR="playbooks"
+CONFIG_KDEVOPS_ANSIBLE_INVENTORY_FILE="hosts"
+CONFIG_KDEVOPS_NODES="nodes.yaml"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-btrfs b/defconfigs/minio-warp-btrfs
new file mode 100644
index 00000000..4efeea7b
--- /dev/null
+++ b/defconfigs/minio-warp-btrfs
@@ -0,0 +1,35 @@
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# MinIO Configuration for Btrfs testing
+CONFIG_MINIO_ENABLE=y
+CONFIG_MINIO_CONTAINER_IMAGE="minio/minio:latest"
+CONFIG_MINIO_CONTAINER_NAME="minio-server"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+
+# Configure Btrfs filesystem for MinIO storage
+CONFIG_MINIO_USE_CUSTOM_FILESYSTEM=y
+CONFIG_MINIO_STORAGE_FSTYPE="btrfs"
+CONFIG_MINIO_STORAGE_FS_OPTS="--nodesize 16k"
+CONFIG_MINIO_STORAGE_LABEL="minio-btrfs"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-network"
+
+# Comprehensive benchmark suite
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="2m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=n
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
diff --git a/defconfigs/minio-warp-declared-hosts b/defconfigs/minio-warp-declared-hosts
new file mode 100644
index 00000000..acf1e648
--- /dev/null
+++ b/defconfigs/minio-warp-declared-hosts
@@ -0,0 +1,56 @@
+#
+# MinIO Warp S3 benchmarking with declared hosts (bare metal or pre-existing infrastructure)
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+# Skip bringup for declared hosts
+CONFIG_SKIP_BRINGUP=y
+CONFIG_KDEVOPS_USE_DECLARED_HOSTS=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_16K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=16384
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
\ No newline at end of file
diff --git a/defconfigs/minio-warp-multifs b/defconfigs/minio-warp-multifs
new file mode 100644
index 00000000..8316a3fe
--- /dev/null
+++ b/defconfigs/minio-warp-multifs
@@ -0,0 +1,74 @@
+#
+# MinIO Warp S3 benchmarking with multi-filesystem testing
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration with Multi-filesystem Testing
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_ENABLE_MULTIFS_TESTING=y
+
+# XFS configurations
+CONFIG_MINIO_MULTIFS_TEST_XFS=y
+CONFIG_MINIO_MULTIFS_XFS_4K_4KS=y
+CONFIG_MINIO_MULTIFS_XFS_16K_4KS=y
+
+# ext4 configurations
+CONFIG_MINIO_MULTIFS_TEST_EXT4=y
+CONFIG_MINIO_MULTIFS_EXT4_4K=y
+
+# btrfs configurations
+CONFIG_MINIO_MULTIFS_TEST_BTRFS=y
+CONFIG_MINIO_MULTIFS_BTRFS_DEFAULT=y
+
+CONFIG_MINIO_MULTIFS_RESULTS_DIR="/data/minio-multifs-benchmark"
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=100
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
diff --git a/defconfigs/minio-warp-storage b/defconfigs/minio-warp-storage
new file mode 100644
index 00000000..fc2a8371
--- /dev/null
+++ b/defconfigs/minio-warp-storage
@@ -0,0 +1,65 @@
+#
+# MinIO Warp S3 benchmarking with dedicated storage configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_4K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=4096
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=100
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
diff --git a/defconfigs/minio-warp-xfs b/defconfigs/minio-warp-xfs
new file mode 100644
index 00000000..b508aba2
--- /dev/null
+++ b/defconfigs/minio-warp-xfs
@@ -0,0 +1,53 @@
+#
+# MinIO Warp S3 benchmarking with XFS 4K block size configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+
+#
+# MinIO Storage Configuration - XFS with 4k blocks
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_4K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=4096
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
diff --git a/defconfigs/minio-warp-xfs-16k b/defconfigs/minio-warp-xfs-16k
new file mode 100644
index 00000000..56cf69b6
--- /dev/null
+++ b/defconfigs/minio-warp-xfs-16k
@@ -0,0 +1,53 @@
+#
+# MinIO Warp S3 benchmarking with XFS 16K block size configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+
+#
+# MinIO Storage Configuration - XFS with 16K blocks
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_16K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=16384
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="5m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=10
+CONFIG_MINIO_WARP_OBJECT_SIZE="1MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=100
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
diff --git a/defconfigs/minio-warp-xfs-lbs b/defconfigs/minio-warp-xfs-lbs
new file mode 100644
index 00000000..2f453bef
--- /dev/null
+++ b/defconfigs/minio-warp-xfs-lbs
@@ -0,0 +1,65 @@
+#
+# MinIO Warp S3 benchmarking with XFS Large Block Size (LBS) configuration
+#
+# Automatically generated file; DO NOT EDIT.
+# kdevops 5.0.2 Configuration
+#
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP=y
+
+#
+# MinIO Docker Configuration
+#
+CONFIG_MINIO_CONTAINER_IMAGE_STRING="minio/minio:RELEASE.2024-01-16T16-07-38Z"
+CONFIG_MINIO_CONTAINER_NAME="minio-warp-server"
+CONFIG_MINIO_ACCESS_KEY="minioadmin"
+CONFIG_MINIO_SECRET_KEY="minioadmin"
+CONFIG_MINIO_DATA_PATH="/data/minio"
+CONFIG_MINIO_DOCKER_NETWORK_NAME="minio-warp-network"
+CONFIG_MINIO_API_PORT=9000
+CONFIG_MINIO_CONSOLE_PORT=9001
+CONFIG_MINIO_MEMORY_LIMIT="4g"
+
+#
+# MinIO Storage Configuration - XFS with 64K blocks (LBS)
+#
+CONFIG_MINIO_STORAGE_ENABLE=y
+CONFIG_MINIO_MOUNT_POINT="/data/minio"
+CONFIG_MINIO_FSTYPE_XFS=y
+CONFIG_MINIO_FSTYPE="xfs"
+CONFIG_MINIO_XFS_BLOCKSIZE_64K=y
+CONFIG_MINIO_XFS_BLOCKSIZE=65536
+CONFIG_MINIO_XFS_SECTORSIZE_4K=y
+CONFIG_MINIO_XFS_SECTORSIZE=4096
+CONFIG_MINIO_XFS_MKFS_OPTS=""
+
+#
+# Warp Benchmark Configuration - Large objects for LBS testing
+#
+CONFIG_MINIO_WARP_RUN_COMPREHENSIVE_SUITE=y
+CONFIG_MINIO_WARP_DURATION="10m"
+CONFIG_MINIO_WARP_CONCURRENT_REQUESTS=20
+CONFIG_MINIO_WARP_OBJECT_SIZE="10MB"
+CONFIG_MINIO_WARP_OBJECTS_PER_REQUEST=50
+CONFIG_MINIO_WARP_BUCKET_NAME="warp-benchmark-bucket"
+CONFIG_MINIO_WARP_AUTO_TERMINATE=y
+CONFIG_MINIO_WARP_ENABLE_CLEANUP=y
+CONFIG_MINIO_WARP_OUTPUT_FORMAT="json"
+
+#
+# Host Configuration
+#
+CONFIG_KDEVOPS_HOSTS_PREFIX="minio"
+
+#
+# Node configuration
+#
+CONFIG_KDEVOPS_NODES_TEMPLATE="guestfs-libvirt"
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME=y
+CONFIG_LIBVIRT_EXTRA_STORAGE_DRIVE_NVME_SIZE_GIB=200
+CONFIG_LIBVIRT_EXTRA_NUM_DRIVES=1
diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index 30d4fc5e..1bd1dd56 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -233,6 +233,13 @@ config KDEVOPS_WORKFLOW_DEDICATE_AI
 	  This will dedicate your configuration to running only the
 	  AI workflow for vector database performance testing.
 
+config KDEVOPS_WORKFLOW_DEDICATE_MINIO
+	bool "minio"
+	select KDEVOPS_WORKFLOW_ENABLE_MINIO
+	help
+	  This will dedicate your configuration to running only the
+	  MinIO workflow for S3 storage benchmarking with Warp testing.
+
 endchoice
 
 config KDEVOPS_WORKFLOW_NAME
@@ -250,6 +257,7 @@ config KDEVOPS_WORKFLOW_NAME
 	default "mmtests" if KDEVOPS_WORKFLOW_DEDICATE_MMTESTS
 	default "fio-tests" if KDEVOPS_WORKFLOW_DEDICATE_FIO_TESTS
 	default "ai" if KDEVOPS_WORKFLOW_DEDICATE_AI
+	default "minio" if KDEVOPS_WORKFLOW_DEDICATE_MINIO
 
 endif
 
@@ -513,6 +521,17 @@ source "workflows/ai/Kconfig"
 endmenu
 endif # KDEVOPS_WORKFLOW_ENABLE_AI
 
+config KDEVOPS_WORKFLOW_ENABLE_MINIO
+	bool
+	output yaml
+	default y if KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_MINIO || KDEVOPS_WORKFLOW_DEDICATE_MINIO
+
+if KDEVOPS_WORKFLOW_ENABLE_MINIO
+menu "Configure and run MinIO S3 benchmarks"
+source "workflows/minio/Kconfig"
+endmenu
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO
+
 config KDEVOPS_WORKFLOW_ENABLE_SSD_STEADY_STATE
        bool "Attain SSD steady state prior to tests"
        output yaml
diff --git a/playbooks/minio.yml b/playbooks/minio.yml
new file mode 100644
index 00000000..bf80bbf4
--- /dev/null
+++ b/playbooks/minio.yml
@@ -0,0 +1,53 @@
+---
+# MinIO S3 Storage Benchmarking Playbook
+
+- name: Install MinIO and setup
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_install']
+  roles:
+    - role: minio_install
+    - role: minio_setup
+      vars:
+        minio_container_image: "{{ minio_container_image_string }}"
+        minio_container_name: "{{ minio_container_name }}"
+        minio_api_port: "{{ minio_api_port }}"
+        minio_console_port: "{{ minio_console_port }}"
+        minio_access_key: "{{ minio_access_key }}"
+        minio_secret_key: "{{ minio_secret_key }}"
+        minio_data_path: "{{ minio_data_path }}"
+        minio_memory_limit: "{{ minio_memory_limit }}"
+        minio_docker_network: "{{ minio_docker_network_name }}"
+
+- name: Run MinIO Warp benchmarks
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_warp']
+  roles:
+    - role: minio_warp_run
+
+- name: Uninstall MinIO
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_uninstall']
+  roles:
+    - role: minio_uninstall
+
+- name: Destroy MinIO and cleanup
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_destroy']
+  roles:
+    - role: minio_destroy
+
+- name: Analyze MinIO results
+  hosts: minio
+  become: true
+  become_user: root
+  tags: ['minio_results']
+  roles:
+    - role: minio_results
diff --git a/playbooks/roles/ai_setup/tasks/main.yml b/playbooks/roles/ai_setup/tasks/main.yml
index b894c964..899fcee1 100644
--- a/playbooks/roles/ai_setup/tasks/main.yml
+++ b/playbooks/roles/ai_setup/tasks/main.yml
@@ -15,7 +15,6 @@
   loop:
     - "{{ ai_docker_data_path }}"
     - "{{ ai_docker_etcd_data_path }}"
-    - "{{ ai_docker_minio_data_path }}"
   when: ai_milvus_docker | bool
   become: true
 
@@ -50,24 +49,20 @@
     memory: "{{ ai_etcd_memory_limit }}"
   when: ai_milvus_docker | bool
 
-- name: Start MinIO container
-  community.docker.docker_container:
-    name: "{{ ai_minio_container_name }}"
-    image: "{{ ai_minio_container_image_string }}"
-    state: started
-    restart_policy: unless-stopped
-    networks:
-      - name: "{{ ai_docker_network_name }}"
-    ports:
-      - "{{ ai_minio_api_port }}:9000"
-      - "{{ ai_minio_console_port }}:9001"
-    env:
-      MINIO_ACCESS_KEY: "{{ ai_minio_access_key }}"
-      MINIO_SECRET_KEY: "{{ ai_minio_secret_key }}"
-    ansible.builtin.command: server /minio_data --console-address ":9001"
-    volumes:
-      - "{{ ai_docker_minio_data_path }}:/minio_data"
-    memory: "{{ ai_minio_memory_limit }}"
+- name: Setup MinIO using shared role
+  ansible.builtin.include_role:
+    name: minio_setup
+  vars:
+    minio_container_image: "{{ ai_minio_container_image_string }}"
+    minio_container_name: "{{ ai_minio_container_name }}"
+    minio_api_port: "{{ ai_minio_api_port }}"
+    minio_console_port: "{{ ai_minio_console_port }}"
+    minio_access_key: "{{ ai_minio_access_key }}"
+    minio_secret_key: "{{ ai_minio_secret_key }}"
+    minio_data_path: "{{ ai_docker_minio_data_path }}"
+    minio_memory_limit: "{{ ai_minio_memory_limit }}"
+    minio_docker_network: "{{ ai_docker_network_name }}"
+    minio_create_network: false  # Network already created above
   when: ai_milvus_docker | bool
 
 - name: Wait for etcd to be ready
@@ -77,13 +72,6 @@
     timeout: 60
   when: ai_milvus_docker | bool
 
-- name: Wait for MinIO to be ready
-  ansible.builtin.wait_for:
-    host: localhost
-    port: "{{ ai_minio_api_port }}"
-    timeout: 60
-  when: ai_milvus_docker | bool
-
 - name: Start Milvus container
   community.docker.docker_container:
     name: "{{ ai_milvus_container_name }}"
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index d44566ad..83829bd6 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -221,6 +221,21 @@
     state: touch
     mode: "0755"
 
+- name: Generate the Ansible hosts file for a dedicated MinIO setup
+  tags: ['hosts']
+  ansible.builtin.template:
+    src: "{{ kdevops_hosts_template }}"
+    dest: "{{ ansible_cfg_inventory }}"
+    force: true
+    trim_blocks: True
+    lstrip_blocks: True
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_hosts_template.stat.exists
+    - not kdevops_use_declared_hosts|default(false)|bool
+
 - name: Verify if final host file exists
   ansible.builtin.stat:
     path: "{{ ansible_cfg_inventory }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/minio.j2 b/playbooks/roles/gen_hosts/templates/workflows/minio.j2
new file mode 100644
index 00000000..42ba326b
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/minio.j2
@@ -0,0 +1,173 @@
+{# Workflow template for MinIO #}
+{% if minio_enable_multifs_testing|default(false)|bool %}
+{# Multi-filesystem MinIO configuration #}
+[all]
+localhost ansible_connection=local
+{% for config in minio_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for config in minio_enabled_section_types|default([]) %}
+{% if '-dev' not in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for config in minio_enabled_section_types|default([]) %}
+{% if '-dev' in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+[minio]
+{% for config in minio_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[minio:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{# Create filesystem-specific groups #}
+{% if minio_multifs_xfs_4k_4ks|default(false)|bool %}
+[minio-xfs-4k]
+{{ kdevops_host_prefix }}-minio-xfs-4k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-4k-dev
+{% endif %}
+
+[minio-xfs-4k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 4096
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_16k_4ks|default(false)|bool %}
+[minio-xfs-16k]
+{{ kdevops_host_prefix }}-minio-xfs-16k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-16k-dev
+{% endif %}
+
+[minio-xfs-16k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 16384
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_32k_4ks|default(false)|bool %}
+[minio-xfs-32k]
+{{ kdevops_host_prefix }}-minio-xfs-32k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-32k-dev
+{% endif %}
+
+[minio-xfs-32k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 32768
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_xfs_64k_4ks|default(false)|bool %}
+[minio-xfs-64k]
+{{ kdevops_host_prefix }}-minio-xfs-64k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-xfs-64k-dev
+{% endif %}
+
+[minio-xfs-64k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "xfs"
+minio_xfs_blocksize = 65536
+minio_xfs_sectorsize = 4096
+{% endif %}
+
+{% if minio_multifs_ext4_4k|default(false)|bool %}
+[minio-ext4-4k]
+{{ kdevops_host_prefix }}-minio-ext4-4k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-ext4-4k-dev
+{% endif %}
+
+[minio-ext4-4k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "ext4"
+minio_ext4_mkfs_opts = "-F"
+{% endif %}
+
+{% if minio_multifs_ext4_16k_bigalloc|default(false)|bool %}
+[minio-ext4-16k-bigalloc]
+{{ kdevops_host_prefix }}-minio-ext4-16k-bigalloc
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-ext4-16k-bigalloc-dev
+{% endif %}
+
+[minio-ext4-16k-bigalloc:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "ext4"
+minio_ext4_mkfs_opts = "-F -O bigalloc -C 16384"
+{% endif %}
+
+{% if minio_multifs_btrfs_default|default(false)|bool %}
+[minio-btrfs]
+{{ kdevops_host_prefix }}-minio-btrfs
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-btrfs-dev
+{% endif %}
+
+[minio-btrfs:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+minio_fstype = "btrfs"
+minio_btrfs_mkfs_opts = "-f"
+{% endif %}
+
+{% else %}
+{# Standard single-filesystem MinIO configuration #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-minio
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-minio
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-minio-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[minio]
+{{ kdevops_host_prefix }}-minio
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-minio-dev
+{% endif %}
+
+[minio:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index b1a1946f..60e7f694 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -790,6 +790,122 @@
     - ai_enabled_section_types is defined
     - ai_enabled_section_types | length > 0
 
+# MinIO S3 Storage Testing workflow nodes
+
+# Multi-filesystem MinIO configurations
+- name: Collect enabled MinIO multi-filesystem configurations
+  vars:
+    xfs_configs: >-
+      {{
+        [] +
+        (['xfs-4k'] if minio_multifs_xfs_4k_4ks|default(false)|bool else []) +
+        (['xfs-16k'] if minio_multifs_xfs_16k_4ks|default(false)|bool else []) +
+        (['xfs-32k'] if minio_multifs_xfs_32k_4ks|default(false)|bool else []) +
+        (['xfs-64k'] if minio_multifs_xfs_64k_4ks|default(false)|bool else [])
+      }}
+    ext4_configs: >-
+      {{
+        [] +
+        (['ext4-4k'] if minio_multifs_ext4_4k|default(false)|bool else []) +
+        (['ext4-16k-bigalloc'] if minio_multifs_ext4_16k_bigalloc|default(false)|bool else [])
+      }}
+    btrfs_configs: >-
+      {{
+        [] +
+        (['btrfs'] if minio_multifs_btrfs_default|default(false)|bool else [])
+      }}
+  set_fact:
+    minio_multifs_enabled_configs: "{{ (xfs_configs + ext4_configs + btrfs_configs) | unique }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+
+- name: Create MinIO nodes for each filesystem configuration (no dev)
+  vars:
+    filesystem_nodes: "{{ [kdevops_host_prefix + '-minio-'] | product(minio_multifs_enabled_configs | default([])) | map('join') | list }}"
+  set_fact:
+    minio_enabled_section_types: "{{ filesystem_nodes }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - not kdevops_baseline_and_dev
+    - minio_multifs_enabled_configs is defined
+    - minio_multifs_enabled_configs | length > 0
+
+- name: Create MinIO nodes for each filesystem configuration with dev hosts
+  vars:
+    filesystem_nodes: "{{ [kdevops_host_prefix + '-minio-'] | product(minio_multifs_enabled_configs | default([])) | map('join') | list }}"
+  set_fact:
+    minio_enabled_section_types: "{{ filesystem_nodes | product(['', '-dev']) | map('join') | list }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - kdevops_baseline_and_dev
+    - minio_multifs_enabled_configs is defined
+    - minio_multifs_enabled_configs | length > 0
+
+- name: Generate the MinIO multi-filesystem kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: [ 'hosts' ]
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ minio_enabled_section_types }}"
+    all_generic_nodes: "{{ minio_enabled_section_types }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - minio_enable_multifs_testing|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - minio_enabled_section_types is defined
+    - minio_enabled_section_types | length > 0
+
+# Standard MinIO single filesystem nodes
+- name: Generate the MinIO kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['hosts']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ [kdevops_host_prefix + '-minio'] }}"
+    all_generic_nodes: "{{ [kdevops_host_prefix + '-minio'] }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - not kdevops_baseline_and_dev
+    - not minio_enable_multifs_testing|default(false)|bool
+
+- name: Generate the MinIO kdevops nodes file with dev hosts using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: ['hosts']
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ [kdevops_host_prefix + '-minio', kdevops_host_prefix + '-minio-dev'] }}"
+    all_generic_nodes: "{{ [kdevops_host_prefix + '-minio', kdevops_host_prefix + '-minio-dev'] }}"
+  ansible.builtin.template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: true
+    mode: '0644'
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_minio|default(false)|bool
+    - ansible_nodes_template.stat.exists
+    - kdevops_baseline_and_dev
+    - not minio_enable_multifs_testing|default(false)|bool
+
 - name: Get the control host's timezone
   ansible.builtin.command: "timedatectl show -p Timezone --value"
   register: kdevops_host_timezone
diff --git a/playbooks/roles/minio_destroy/tasks/main.yml b/playbooks/roles/minio_destroy/tasks/main.yml
new file mode 100644
index 00000000..078cb13f
--- /dev/null
+++ b/playbooks/roles/minio_destroy/tasks/main.yml
@@ -0,0 +1,34 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Stop and remove MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    state: absent
+  ignore_errors: yes
+
+- name: Remove Docker network
+  community.docker.docker_network:
+    name: "{{ minio_docker_network_name }}"
+    state: absent
+  ignore_errors: yes
+
+- name: Clean up MinIO data directory
+  file:
+    path: "{{ minio_data_path }}"
+    state: absent
+  when: minio_warp_enable_cleanup | default(true) | bool
+
+- name: Clean up temporary Warp results
+  file:
+    path: "/tmp/warp-results"
+    state: absent
+
+- name: Display MinIO destroy complete
+  debug:
+    msg: "MinIO containers and data have been cleaned up"
diff --git a/playbooks/roles/minio_install/tasks/main.yml b/playbooks/roles/minio_install/tasks/main.yml
new file mode 100644
index 00000000..9ea3d758
--- /dev/null
+++ b/playbooks/roles/minio_install/tasks/main.yml
@@ -0,0 +1,61 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Install Docker
+  package:
+    name:
+      - docker.io
+      - python3-docker
+    state: present
+  become: yes
+
+- name: Ensure Docker service is running
+  systemd:
+    name: docker
+    state: started
+    enabled: yes
+  become: yes
+
+- name: Add current user to docker group
+  user:
+    name: "{{ ansible_user | default('kdevops') }}"
+    groups: docker
+    append: yes
+  become: yes
+
+- name: Install MinIO Warp
+  block:
+    - name: Download MinIO Warp binary
+      get_url:
+        url: "https://github.com/minio/warp/releases/latest/download/warp_Linux_x86_64.tar.gz"
+        dest: "/tmp/warp_Linux_x86_64.tar.gz"
+        mode: '0644'
+
+    - name: Extract MinIO Warp
+      unarchive:
+        src: "/tmp/warp_Linux_x86_64.tar.gz"
+        dest: "/tmp"
+        remote_src: yes
+
+    - name: Install Warp binary
+      copy:
+        src: "/tmp/warp"
+        dest: "/usr/local/bin/warp"
+        mode: '0755'
+        owner: root
+        group: root
+        remote_src: yes
+      become: yes
+
+    - name: Clean up downloaded files
+      file:
+        path: "{{ item }}"
+        state: absent
+      loop:
+        - "/tmp/warp_Linux_x86_64.tar.gz"
+        - "/tmp/warp"
diff --git a/playbooks/roles/minio_results/tasks/main.yml b/playbooks/roles/minio_results/tasks/main.yml
new file mode 100644
index 00000000..74038552
--- /dev/null
+++ b/playbooks/roles/minio_results/tasks/main.yml
@@ -0,0 +1,86 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Create results analysis script
+  copy:
+    content: |
+      #!/usr/bin/env python3
+      import json
+      import glob
+      import os
+      import sys
+      from pathlib import Path
+
+      def analyze_warp_results():
+          results_dir = Path("{{ playbook_dir }}/../workflows/minio/results")
+          result_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+          if not result_files:
+              print("No Warp benchmark results found.")
+              return
+
+          print("MinIO Warp Benchmark Results Summary")
+          print("=" * 50)
+
+          total_throughput = 0
+          total_requests = 0
+          for result_file in result_files:
+              try:
+                  with open(result_file, 'r') as f:
+                      data = json.load(f)
+
+                  hostname = result_file.name.split('_')[2]
+                  timestamp = result_file.name.split('_')[3].replace('.json', '')
+
+                  throughput_mbps = req_per_sec = 0  # defaults so summary prints never NameError
+                  if 'throughput' in data:
+                      throughput_mbps = data['throughput'].get('average', 0) / (1024 * 1024)
+                      total_throughput += throughput_mbps
+
+                  if 'requests' in data:
+                      req_per_sec = data['requests'].get('average', 0)
+                      total_requests += req_per_sec
+
+                  print(f"\nHost: {hostname}")
+                  print(f"Timestamp: {timestamp}")
+                  print(f"Throughput: {throughput_mbps:.2f} MB/s")
+                  print(f"Requests/sec: {req_per_sec:.2f}")
+
+                  if 'latency' in data:
+                      avg_latency = data['latency'].get('average', 0)
+                      print(f"Average Latency: {avg_latency:.2f} ms")
+
+              except Exception as e:
+                  print(f"Error processing {result_file}: {e}")
+          print("\n" + "=" * 50)
+          print(f"Total Throughput: {total_throughput:.2f} MB/s")
+          print(f"Total Requests/sec: {total_requests:.2f}")
+
+      if __name__ == "__main__":
+          analyze_warp_results()
+    dest: "/tmp/analyze_minio_results.py"
+    mode: '0755'
+  delegate_to: localhost
+  run_once: true
+
+- name: Run results analysis
+  command: python3 /tmp/analyze_minio_results.py
+  register: analysis_output
+  delegate_to: localhost
+  run_once: true
+
+- name: Display analysis results
+  debug:
+    var: analysis_output.stdout_lines
+
+- name: Create results summary file
+  copy:
+    content: "{{ analysis_output.stdout }}"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/benchmark_summary.txt"
+  delegate_to: localhost
+  run_once: true
diff --git a/playbooks/roles/minio_setup/defaults/main.yml b/playbooks/roles/minio_setup/defaults/main.yml
new file mode 100644
index 00000000..14030103
--- /dev/null
+++ b/playbooks/roles/minio_setup/defaults/main.yml
@@ -0,0 +1,16 @@
+---
+# MinIO Docker container defaults
+minio_container_image: "minio/minio:RELEASE.2024-01-16T16-07-38Z"
+minio_container_name: "minio-server"
+minio_api_port: 9000
+minio_console_port: 9001
+minio_access_key: "minioadmin"
+minio_secret_key: "minioadmin"
+minio_data_path: "/data/minio"
+minio_memory_limit: "2g"
+minio_docker_network: "minio-network"
+
+# MinIO service configuration
+minio_enable: true
+minio_create_network: true
+minio_wait_for_ready: true
diff --git a/playbooks/roles/minio_setup/tasks/main.yml b/playbooks/roles/minio_setup/tasks/main.yml
new file mode 100644
index 00000000..db7e3d6d
--- /dev/null
+++ b/playbooks/roles/minio_setup/tasks/main.yml
@@ -0,0 +1,100 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Setup dedicated MinIO storage filesystem if configured
+  when:
+    - minio_storage_enable | default(false) | bool
+    - minio_device is defined
+  block:
+    - name: Prepare filesystem mkfs options
+      set_fact:
+        minio_mkfs_opts: >-
+          {%- if minio_fstype == "xfs" -%}
+            -L miniostorage -f -b size={{ minio_xfs_blocksize | default(4096) }} -s size={{ minio_xfs_sectorsize | default(4096) }} {{ minio_xfs_mkfs_opts | default('') }}
+          {%- elif minio_fstype == "btrfs" -%}
+            -L miniostorage {{ minio_btrfs_mkfs_opts | default('-f') }}
+          {%- elif minio_fstype == "ext4" -%}
+            -L miniostorage {{ minio_ext4_mkfs_opts | default('-F') }}
+          {%- elif minio_fstype == "bcachefs" -%}
+            --label=miniostorage {{ minio_bcachefs_mkfs_opts | default('-f') }}
+          {%- else -%}
+            -L miniostorage -f
+          {%- endif -%}
+
+    - name: Create MinIO storage filesystem
+      include_role:
+        name: create_partition
+      vars:
+        disk_setup_device: "{{ minio_device }}"
+        disk_setup_fstype: "{{ minio_fstype | default('xfs') }}"
+        disk_setup_label: "miniostorage"
+        disk_setup_fs_opts: "{{ minio_mkfs_opts }}"
+        disk_setup_path: "{{ minio_mount_point | default('/data/minio') }}"
+        disk_setup_user: "root"
+        disk_setup_group: "root"
+
+- name: Create MinIO data directory
+  file:
+    path: "{{ minio_data_path | default('/data/minio') }}"
+    state: directory
+    mode: '0755'
+  when:
+    - not (minio_storage_enable | default(false) | bool)
+  become: true
+
+- name: Check filesystem type for MinIO data path
+  shell: df -T "{{ minio_data_path }}" | tail -1 | awk '{print $2}'
+  register: minio_fs_type
+  changed_when: false
+
+- name: Get filesystem details
+  shell: |
+    df -h "{{ minio_data_path }}" | tail -1
+  register: minio_fs_details
+  changed_when: false
+
+- name: Display filesystem information
+  debug:
+    msg: |
+      MinIO Storage Configuration:
+        Data Path: {{ minio_data_path }}
+        Filesystem: {{ minio_fs_type.stdout }}
+        Storage Details: {{ minio_fs_details.stdout }}
+
+- name: Create Docker network for MinIO
+  community.docker.docker_network:
+    name: "{{ minio_docker_network_name | default(minio_docker_network) }}"
+    state: present
+  when: minio_enable | bool and minio_create_network | bool
+
+- name: Start MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    image: "{{ minio_container_image }}"
+    state: started
+    restart_policy: unless-stopped
+    networks:
+      - name: "{{ minio_docker_network_name | default(minio_docker_network) }}"
+    ports:
+      - "{{ minio_api_port }}:9000"
+      - "{{ minio_console_port }}:9001"
+    env:
+      MINIO_ROOT_USER: "{{ minio_access_key }}"
+      MINIO_ROOT_PASSWORD: "{{ minio_secret_key }}"
+    command: server /minio_data --console-address ":9001"
+    volumes:
+      - "{{ minio_data_path }}:/minio_data"
+    memory: "{{ minio_memory_limit }}"
+  when: minio_enable | bool
+
+- name: Wait for MinIO to be ready
+  wait_for:
+    host: localhost
+    port: "{{ minio_api_port }}"
+    timeout: 60
+  when: minio_enable | bool and minio_wait_for_ready | bool
diff --git a/playbooks/roles/minio_uninstall/tasks/main.yml b/playbooks/roles/minio_uninstall/tasks/main.yml
new file mode 100644
index 00000000..bea15439
--- /dev/null
+++ b/playbooks/roles/minio_uninstall/tasks/main.yml
@@ -0,0 +1,17 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Stop MinIO container
+  community.docker.docker_container:
+    name: "{{ minio_container_name }}"
+    state: stopped
+  ignore_errors: yes
+
+- name: Display MinIO uninstallation complete
+  debug:
+    msg: "MinIO container stopped"
diff --git a/playbooks/roles/minio_warp_run/tasks/main.yml b/playbooks/roles/minio_warp_run/tasks/main.yml
new file mode 100644
index 00000000..2c073024
--- /dev/null
+++ b/playbooks/roles/minio_warp_run/tasks/main.yml
@@ -0,0 +1,249 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_items:
+    - "../extra_vars.yaml"
+  tags: vars
+
+- name: Create Warp results directory on remote host
+  file:
+    path: "/tmp/warp-results"
+    state: directory
+    mode: '0755'
+
+- name: Ensure local results directory exists with proper permissions
+  block:
+    - name: Create local results directory
+      file:
+        path: "{{ playbook_dir }}/../workflows/minio/results"
+        state: directory
+        mode: '0755'
+      delegate_to: localhost
+      run_once: true
+      become: no
+  rescue:
+    - name: Fix results directory permissions if needed
+      file:
+        path: "{{ playbook_dir }}/../workflows/minio/results"
+        state: directory
+        mode: '0755'
+        owner: "{{ lookup('env', 'USER') }}"
+        group: "{{ lookup('env', 'USER') }}"
+      delegate_to: localhost
+      run_once: true
+      become: yes
+
+
+- name: Wait for MinIO to be fully ready
+  wait_for:
+    host: localhost
+    port: "{{ minio_api_port }}"
+    timeout: 120
+  retries: 3
+  delay: 10
+
+- name: Check if Warp is installed
+  command: which warp
+  register: warp_check
+  failed_when: false
+  changed_when: false
+
+- name: Verify Warp installation
+  fail:
+    msg: "MinIO Warp is not installed. Please run 'make minio-install' first."
+  when: warp_check.rc != 0
+
+- name: Create Warp configuration file
+  template:
+    src: warp_config.json.j2
+    dest: "/tmp/warp_config.json"
+    mode: '0644'
+
+- name: Set MinIO endpoint URL
+  set_fact:
+    minio_endpoint: "localhost:{{ minio_api_port }}"
+
+- name: Display Warp version
+  command: warp --version
+  register: warp_version
+  changed_when: false
+
+- name: Show Warp version
+  debug:
+    msg: "MinIO Warp version: {{ warp_version.stdout }}"
+
+- name: Calculate benchmark timeout
+  set_fact:
+    # Parse duration and add 10 minutes buffer
+    benchmark_timeout: >-
+      {%- set duration_str = minio_warp_duration | string -%}
+      {%- if 's' in duration_str -%}
+        {{ (duration_str | replace('s','') | int) + 600 }}
+      {%- elif 'm' in duration_str -%}
+        {{ (duration_str | replace('m','') | int * 60) + 600 }}
+      {%- elif 'h' in duration_str -%}
+        {{ (duration_str | replace('h','') | int * 3600) + 600 }}
+      {%- else -%}
+        {{ 2400 }}
+      {%- endif -%}
+
+- name: Copy comprehensive benchmark script
+  copy:
+    src: "{{ playbook_dir }}/../workflows/minio/scripts/run_benchmark_suite.sh"
+    dest: "/tmp/run_benchmark_suite.sh"
+    mode: '0755'
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Display benchmark configuration
+  debug:
+    msg: |
+      Comprehensive suite: {{ minio_warp_run_comprehensive_suite | default(false) }}
+      Duration: {{ minio_warp_duration }}
+      Timeout: {{ benchmark_timeout }} seconds
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Run comprehensive benchmark suite
+  shell: |
+    set -x  # Enable debug output
+    echo "Starting comprehensive benchmark suite"
+    echo "Duration parameter: {{ minio_warp_duration }}"
+    /tmp/run_benchmark_suite.sh \
+      "{{ minio_endpoint }}" \
+      "{{ minio_access_key }}" \
+      "{{ minio_secret_key }}" \
+      "{{ minio_warp_duration }}"
+    EXIT_CODE=$?
+    echo "Benchmark suite completed with exit code: $EXIT_CODE"
+    exit $EXIT_CODE
+  args:
+    executable: /bin/bash
+  register: suite_output
+  when: minio_warp_run_comprehensive_suite | default(false)
+  async: "{{ benchmark_timeout | default(3600) | int }}"  # Use calculated timeout or 1 hour default
+  poll: 30
+
+- name: Display comprehensive suite output
+  debug:
+    msg: |
+      Suite completed: {{ suite_output is defined }}
+      Exit code: {{ suite_output.rc | default('N/A') }}
+      Output: {{ suite_output.stdout | default('No output') | truncate(500) }}
+  when: minio_warp_run_comprehensive_suite | default(false)
+
+- name: Debug - Show which path we're taking
+  debug:
+    msg: |
+      Comprehensive suite enabled: {{ minio_warp_run_comprehensive_suite | default(false) }}
+      Duration: {{ minio_warp_duration }}
+      Benchmark timeout: {{ benchmark_timeout }} seconds
+
+- name: Set timestamp for consistent filename
+  set_fact:
+    warp_timestamp: "{{ ansible_date_time.epoch }}"
+  when: not (minio_warp_run_comprehensive_suite | default(false))
+
+- name: Run MinIO Warp single benchmark with JSON output
+  shell: |
+    echo "=== Starting single benchmark ==="
+    echo "Duration: {{ minio_warp_duration }}"
+    echo "Full command:"
+    OUTPUT_FILE="/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+
+    # Show the actual command being run
+    set -x
+    # IMPORTANT: --autoterm with --objects makes warp stop after N objects, ignoring --duration!
+    # For duration-based tests, do not use --autoterm
+    time warp {{ minio_warp_benchmark_type }} \
+      --host="{{ minio_endpoint }}" \
+      --access-key="{{ minio_access_key }}" \
+      --secret-key="{{ minio_secret_key }}" \
+      --bucket="{{ minio_warp_bucket_name }}" \
+      --duration="{{ minio_warp_duration }}" \
+      --concurrent="{{ minio_warp_concurrent_requests }}" \
+      --obj.size="{{ minio_warp_object_size }}" \
+      {% if minio_warp_enable_web_ui|default(false) %}--warp-client="{{ ansible_default_ipv4.address }}:{{ minio_warp_web_ui_port|default(7762) }}"{% endif %} \
+      --noclear \
+      --json > "$OUTPUT_FILE" 2>&1
+    RESULT=$?
+    set +x
+
+    echo "=== Benchmark completed with exit code: $RESULT ==="
+    echo "=== Output file size: $(ls -lh $OUTPUT_FILE 2>/dev/null | awk '{print $5}') ==="
+
+    # Check if file was created
+    if [ -f "$OUTPUT_FILE" ]; then
+      echo "Results saved to: $OUTPUT_FILE"
+      ls -la "$OUTPUT_FILE"
+    else
+      echo "Warning: Results file not created"
+    fi
+    exit $RESULT
+  args:
+    executable: /bin/bash
+  environment:
+    WARP_ACCESS_KEY: "{{ minio_access_key }}"
+    WARP_SECRET_KEY: "{{ minio_secret_key }}"
+  register: warp_output
+  async: "{{ benchmark_timeout | int }}"
+  poll: 30
+  when: not (minio_warp_run_comprehensive_suite | default(false))
+
+- name: Display benchmark completion
+  debug:
+    msg: "MinIO Warp benchmark completed on {{ ansible_hostname }}"
+  when: (warp_output is defined and warp_output.rc | default(1) == 0) or (suite_output is defined and suite_output.rc | default(1) == 0)
+
+- name: Check if results file exists
+  stat:
+    path: "/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+  register: results_file
+  when: warp_timestamp is defined
+
+- name: Display results file status
+  debug:
+    msg: "Results file exists: {{ results_file.stat.exists }}, Size: {{ results_file.stat.size | default(0) }} bytes"
+  when: results_file is defined and not results_file.skipped | default(false)
+
+- name: Copy results to local system
+  fetch:
+    src: "/tmp/warp-results/warp_benchmark_{{ ansible_hostname }}_{{ warp_timestamp }}.json"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/"
+    flat: yes
+  become: no
+  when: results_file is defined and not results_file.skipped | default(false) and results_file.stat.exists | default(false)
+
+- name: Generate graphs and HTML report
+  command: "python3 {{ playbook_dir }}/../workflows/minio/scripts/generate_warp_report.py {{ playbook_dir }}/../workflows/minio/results/"
+  delegate_to: localhost
+  run_once: true
+  become: no
+  when: results_file is defined and not results_file.skipped | default(false) and results_file.stat.exists | default(false)
+  ignore_errors: yes
+
+- name: Save benchmark output as fallback
+  copy:
+    content: |
+      MinIO Warp Benchmark Results
+      ============================
+      Host: {{ ansible_hostname }}
+      Timestamp: {{ warp_timestamp | default('unknown') }}
+
+      Suite Output (if any):
+      {{ suite_output.stdout | default('No suite output') }}
+      {{ suite_output.stderr | default('') }}
+
+      Full Benchmark Output:
+      {{ warp_output.stdout | default('No benchmark output') }}
+
+      Error Output (if any):
+      {{ warp_output.stderr | default('No errors') }}
+    dest: "/tmp/warp-results/warp_fallback_{{ ansible_hostname }}_{{ warp_timestamp | default(ansible_date_time.epoch) }}.txt"
+  when: warp_output is defined or suite_output is defined
+
+- name: Copy fallback results
+  fetch:
+    src: "/tmp/warp-results/warp_fallback_{{ ansible_hostname }}_{{ warp_timestamp | default(ansible_date_time.epoch) }}.txt"
+    dest: "{{ playbook_dir }}/../workflows/minio/results/"
+    flat: yes
+  when: (warp_output is defined or suite_output is defined) and not (results_file is defined and not results_file.skipped | default(false) and results_file.stat.exists | default(false))
diff --git a/playbooks/roles/minio_warp_run/templates/warp_config.json.j2 b/playbooks/roles/minio_warp_run/templates/warp_config.json.j2
new file mode 100644
index 00000000..d5c4dc8b
--- /dev/null
+++ b/playbooks/roles/minio_warp_run/templates/warp_config.json.j2
@@ -0,0 +1,14 @@
+{
+  "benchmark": "{{ minio_warp_benchmark_type }}",
+  "endpoint": "http://localhost:{{ minio_api_port }}",
+  "access_key": "{{ minio_access_key }}",
+  "secret_key": "{{ minio_secret_key }}",
+  "bucket": "{{ minio_warp_bucket_name }}",
+  "duration": "{{ minio_warp_duration }}",
+  "concurrent": {{ minio_warp_concurrent_requests }},
+  "object_size": "{{ minio_warp_object_size }}",
+  "objects": {{ minio_warp_objects_per_request }},
+  "auto_terminate": {{ minio_warp_auto_terminate | lower }},
+  "cleanup": {{ minio_warp_enable_cleanup | lower }},
+  "output_format": "{{ minio_warp_output_format }}"
+}
diff --git a/workflows/Makefile b/workflows/Makefile
index fe35707b..ee90227e 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -70,6 +70,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_AI))
 include workflows/ai/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_AI == y
 
+ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO))
+include workflows/minio/Makefile
+endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO == y
+
 ANSIBLE_EXTRA_ARGS += $(WORKFLOW_ARGS)
 ANSIBLE_EXTRA_ARGS_SEPARATED += $(WORKFLOW_ARGS_SEPARATED)
 ANSIBLE_EXTRA_ARGS_DIRECT += $(WORKFLOW_ARGS_DIRECT)
diff --git a/workflows/minio/Kconfig b/workflows/minio/Kconfig
new file mode 100644
index 00000000..2af12fc9
--- /dev/null
+++ b/workflows/minio/Kconfig
@@ -0,0 +1,23 @@
+if KDEVOPS_WORKFLOW_ENABLE_MINIO
+
+menu "MinIO S3 Storage Testing"
+
+config KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+	bool "Enable MinIO Warp benchmarking"
+	default y
+	help
+	Enable MinIO Warp for S3 storage benchmarking. Warp provides
+	comprehensive S3 API performance testing with multiple benchmark
+	types including GET, PUT, DELETE, LIST, and MULTIPART operations.
+
+if KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+
+source "workflows/minio/Kconfig.docker"
+source "workflows/minio/Kconfig.storage"
+source "workflows/minio/Kconfig.warp"
+
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO_WARP
+
+endmenu
+
+endif # KDEVOPS_WORKFLOW_ENABLE_MINIO
diff --git a/workflows/minio/Kconfig.docker b/workflows/minio/Kconfig.docker
new file mode 100644
index 00000000..3a337194
--- /dev/null
+++ b/workflows/minio/Kconfig.docker
@@ -0,0 +1,66 @@
+config MINIO_CONTAINER_IMAGE_STRING
+	string "MinIO container image"
+	output yaml
+	default "minio/minio:RELEASE.2024-01-16T16-07-38Z"
+	help
+	The MinIO container image to use for S3 storage benchmarking.
+	Using a recent stable release with performance improvements.
+
+config MINIO_CONTAINER_NAME
+	string "The local MinIO container name"
+	default "minio-warp-server"
+	output yaml
+	help
+	Set the name for the MinIO Docker container.
+
+config MINIO_ACCESS_KEY
+	string "MinIO access key"
+	output yaml
+	default "minioadmin"
+	help
+	Access key for MinIO S3 API access.
+
+config MINIO_SECRET_KEY
+	string "MinIO secret key"
+	output yaml
+	default "minioadmin"
+	help
+	Secret key for MinIO S3 API access.
+
+config MINIO_DATA_PATH
+	string "Host path for MinIO data storage"
+	output yaml
+	default "/data/minio"
+	help
+	Directory on the host where MinIO data will be persisted.
+	If using dedicated storage, this will be the mount point.
+	Otherwise, uses the existing filesystem at this path.
+
+config MINIO_DOCKER_NETWORK_NAME
+	string "Docker network name"
+	output yaml
+	default "minio-warp-network"
+	help
+	Name of the Docker network to create for MinIO containers.
+
+config MINIO_API_PORT
+	int "MinIO API port"
+	output yaml
+	default "9000"
+	help
+	Port for MinIO S3 API access.
+
+config MINIO_CONSOLE_PORT
+	int "MinIO console port"
+	output yaml
+	default "9001"
+	help
+	Port for MinIO web console access.
+
+config MINIO_MEMORY_LIMIT
+	string "MinIO container memory limit"
+	output yaml
+	default "4g"
+	help
+	Memory limit for the MinIO container. Adjust based on
+	your system resources and workload requirements.
diff --git a/workflows/minio/Kconfig.storage b/workflows/minio/Kconfig.storage
new file mode 100644
index 00000000..88159123
--- /dev/null
+++ b/workflows/minio/Kconfig.storage
@@ -0,0 +1,364 @@
+menu "MinIO Storage Configuration"
+
+# CLI override support for WARP_DEVICE
+config MINIO_DEVICE_SET_BY_CLI
+	bool
+	output yaml
+	default $(shell, scripts/check-cli-set-var.sh WARP_DEVICE)
+
+config MINIO_STORAGE_ENABLE
+	bool "Enable dedicated MinIO storage device"
+	default y
+	output yaml
+	help
+	  Configure a dedicated storage device for MinIO data storage.
+	  This allows testing MinIO performance on different filesystems
+	  and configurations by creating and mounting a dedicated partition.
+
+	  When enabled, MinIO data will be stored on a dedicated device
+	  and filesystem optimized for S3 workloads.
+
+if MINIO_STORAGE_ENABLE
+
+config MINIO_DEVICE
+	string "Device to use for MinIO storage"
+	output yaml
+	default $(shell, ./scripts/append-makefile-vars.sh $(WARP_DEVICE)) if MINIO_DEVICE_SET_BY_CLI
+	default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
+	default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
+	default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
+	default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_2XLARGE
+	default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
+	default "/dev/nvme1n1" if TERRAFORM_GCE
+	default "/dev/sdd" if TERRAFORM_AZURE
+	default TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME if TERRAFORM_OCI
+	help
+	  The device to use for MinIO storage. This device will be
+	  formatted and mounted to store MinIO S3 data.
+
+	  Can be overridden with WARP_DEVICE environment variable:
+	    make defconfig-minio-warp-xfs-16k WARP_DEVICE=/dev/nvme4n1
+
+config MINIO_MOUNT_POINT
+	string "Mount point for MinIO storage"
+	output yaml
+	default "/data/minio"
+	help
+	  The path where the MinIO storage filesystem will be mounted.
+	  MinIO will store all S3 data under this path.
+
+choice
+	prompt "MinIO storage filesystem"
+	default MINIO_FSTYPE_XFS
+
+config MINIO_FSTYPE_XFS
+	bool "XFS"
+	help
+	  Use XFS filesystem for MinIO storage. XFS provides excellent
+	  performance for large files and is recommended for production
+	  MinIO deployments. Supports various block sizes for testing
+	  large block size (LBS) configurations.
+
+config MINIO_FSTYPE_BTRFS
+	bool "Btrfs"
+	help
+	  Use Btrfs filesystem for MinIO storage. Btrfs provides
+	  advanced features like snapshots and compression, which can
+	  be beneficial for S3 storage management.
+
+config MINIO_FSTYPE_EXT4
+	bool "ext4"
+	help
+	  Use ext4 filesystem for MinIO storage. Ext4 is a mature
+	  and reliable filesystem with good all-around performance.
+
+config MINIO_FSTYPE_BCACHEFS
+	bool "bcachefs"
+	help
+	  Use bcachefs filesystem for MinIO storage. Bcachefs is a
+	  modern filesystem with advanced features like compression,
+	  encryption, and caching.
+
+endchoice
+
+config MINIO_FSTYPE
+	string
+	output yaml
+	default "xfs" if MINIO_FSTYPE_XFS
+	default "btrfs" if MINIO_FSTYPE_BTRFS
+	default "ext4" if MINIO_FSTYPE_EXT4
+	default "bcachefs" if MINIO_FSTYPE_BCACHEFS
+
+if MINIO_FSTYPE_XFS
+
+choice
+	prompt "XFS block size configuration"
+	default MINIO_XFS_BLOCKSIZE_4K
+
+config MINIO_XFS_BLOCKSIZE_4K
+	bool "4K block size (default)"
+	help
+	  Use 4K (4096 bytes) block size. This is the default and most
+	  compatible configuration.
+
+config MINIO_XFS_BLOCKSIZE_8K
+	bool "8K block size"
+	help
+	  Use 8K (8192 bytes) block size for improved performance with
+	  larger I/O operations.
+
+config MINIO_XFS_BLOCKSIZE_16K
+	bool "16K block size (LBS)"
+	help
+	  Use 16K (16384 bytes) block size. This is a large block size
+	  configuration that may require kernel LBS support.
+
+config MINIO_XFS_BLOCKSIZE_32K
+	bool "32K block size (LBS)"
+	help
+	  Use 32K (32768 bytes) block size. This is a large block size
+	  configuration that requires kernel LBS support.
+
+config MINIO_XFS_BLOCKSIZE_64K
+	bool "64K block size (LBS)"
+	help
+	  Use 64K (65536 bytes) block size. This is the maximum XFS block
+	  size and requires kernel LBS support.
+
+endchoice
+
+config MINIO_XFS_BLOCKSIZE
+	int
+	output yaml
+	default 4096 if MINIO_XFS_BLOCKSIZE_4K
+	default 8192 if MINIO_XFS_BLOCKSIZE_8K
+	default 16384 if MINIO_XFS_BLOCKSIZE_16K
+	default 32768 if MINIO_XFS_BLOCKSIZE_32K
+	default 65536 if MINIO_XFS_BLOCKSIZE_64K
+
+choice
+	prompt "XFS sector size"
+	default MINIO_XFS_SECTORSIZE_4K
+
+config MINIO_XFS_SECTORSIZE_4K
+	bool "4K sector size (default)"
+	help
+	  Use 4K (4096 bytes) sector size. This is the standard
+	  configuration for most modern drives.
+
+config MINIO_XFS_SECTORSIZE_512
+	bool "512 byte sector size"
+	depends on MINIO_XFS_BLOCKSIZE_4K
+	help
+	  Use legacy 512 byte sector size. Only available with 4K block size.
+
+config MINIO_XFS_SECTORSIZE_8K
+	bool "8K sector size"
+	depends on MINIO_XFS_BLOCKSIZE_8K || MINIO_XFS_BLOCKSIZE_16K || MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 8K (8192 bytes) sector size. Requires block size >= 8K.
+
+config MINIO_XFS_SECTORSIZE_16K
+	bool "16K sector size (LBS)"
+	depends on MINIO_XFS_BLOCKSIZE_16K || MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 16K (16384 bytes) sector size. Requires block size >= 16K
+	  and kernel LBS support.
+
+config MINIO_XFS_SECTORSIZE_32K
+	bool "32K sector size (LBS)"
+	depends on MINIO_XFS_BLOCKSIZE_32K || MINIO_XFS_BLOCKSIZE_64K
+	help
+	  Use 32K (32768 bytes) sector size. Requires block size >= 32K
+	  and kernel LBS support.
+
+endchoice
+
+config MINIO_XFS_SECTORSIZE
+	int
+	output yaml
+	default 512 if MINIO_XFS_SECTORSIZE_512
+	default 4096 if MINIO_XFS_SECTORSIZE_4K
+	default 8192 if MINIO_XFS_SECTORSIZE_8K
+	default 16384 if MINIO_XFS_SECTORSIZE_16K
+	default 32768 if MINIO_XFS_SECTORSIZE_32K
+
+config MINIO_XFS_MKFS_OPTS
+	string "Additional XFS mkfs options for MinIO storage"
+	output yaml
+	default ""
+	help
+	  Additional options to pass to mkfs.xfs when creating the MinIO
+	  storage filesystem. Block and sector sizes are configured above.
+
+endif # MINIO_FSTYPE_XFS
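The block/sector size symbols above presumably end up as `-b size=` and `-s size=` arguments to mkfs.xfs when the storage role formats the device. A minimal sketch of that translation, assuming illustrative names (`build_mkfs_xfs_cmd` and its parameters are not part of kdevops):

```python
def build_mkfs_xfs_cmd(device, blocksize=4096, sectorsize=4096, extra_opts=""):
    """Build an mkfs.xfs invocation from the Kconfig-derived sizes.

    Mirrors how a provisioning role might consume MINIO_XFS_BLOCKSIZE,
    MINIO_XFS_SECTORSIZE and MINIO_XFS_MKFS_OPTS; the function name and
    signature are illustrative, not kdevops code.
    """
    if sectorsize > blocksize:
        # Kconfig already enforces this via the depends-on chains above
        raise ValueError("sector size cannot exceed block size")
    cmd = f"mkfs.xfs -f -b size={blocksize} -s size={sectorsize}"
    if extra_opts:
        cmd += f" {extra_opts}"
    return f"{cmd} {device}"
```

The `depends on` chains in the sector-size choice guarantee the `sectorsize > blocksize` branch is unreachable for any configuration Kconfig will actually produce.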
+
+config MINIO_BTRFS_MKFS_OPTS
+	string "Btrfs mkfs options for MinIO storage"
+	output yaml
+	default "-f"
+	depends on MINIO_FSTYPE_BTRFS
+	help
+	  Options to pass to mkfs.btrfs when creating the MinIO storage
+	  filesystem.
+
+config MINIO_EXT4_MKFS_OPTS
+	string "ext4 mkfs options for MinIO storage"
+	output yaml
+	default "-F"
+	depends on MINIO_FSTYPE_EXT4
+	help
+	  Options to pass to mkfs.ext4 when creating the MinIO storage
+	  filesystem.
+
+config MINIO_BCACHEFS_MKFS_OPTS
+	string "bcachefs mkfs options for MinIO storage"
+	output yaml
+	default "-f"
+	depends on MINIO_FSTYPE_BCACHEFS
+	help
+	  Options to pass to mkfs.bcachefs when creating the MinIO storage
+	  filesystem.
+
+endif # MINIO_STORAGE_ENABLE
+
+# Multi-filesystem configuration when not using declared hosts
+if !KDEVOPS_USE_DECLARED_HOSTS && MINIO_STORAGE_ENABLE
+
+config MINIO_ENABLE_MULTIFS_TESTING
+	bool "Enable multi-filesystem testing"
+	default n
+	output yaml
+	help
+	  Enable testing the same MinIO workload across multiple filesystem
+	  configurations. This allows comparing S3 performance characteristics
+	  between different filesystems and their configurations.
+
+	  When enabled, multiple nodes will be created with different
+	  filesystem configurations for comprehensive performance analysis.
+
+if MINIO_ENABLE_MULTIFS_TESTING
+
+config MINIO_MULTIFS_TEST_XFS
+	bool "Test XFS configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on XFS filesystem with different
+	  block size configurations.
+
+if MINIO_MULTIFS_TEST_XFS
+
+menu "XFS configuration profiles"
+
+config MINIO_MULTIFS_XFS_4K_4KS
+	bool "XFS 4k block size - 4k sector size"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 4k filesystem block size
+	  and 4k sector size. This is the most common configuration
+	  and provides good performance for most S3 workloads.
+
+config MINIO_MULTIFS_XFS_16K_4KS
+	bool "XFS 16k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 16k filesystem block size
+	  and 4k sector size. Larger block sizes can improve performance
+	  for large object storage patterns.
+
+config MINIO_MULTIFS_XFS_32K_4KS
+	bool "XFS 32k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 32k filesystem block size
+	  and 4k sector size. Even larger block sizes can provide
+	  benefits for very large S3 objects.
+
+config MINIO_MULTIFS_XFS_64K_4KS
+	bool "XFS 64k block size - 4k sector size"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on XFS with 64k filesystem block size
+	  and 4k sector size. Maximum supported block size for XFS,
+	  optimized for very large object operations.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_XFS
+
+config MINIO_MULTIFS_TEST_EXT4
+	bool "Test ext4 configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on ext4 filesystem with different
+	  configurations including bigalloc options.
+
+if MINIO_MULTIFS_TEST_EXT4
+
+menu "ext4 configuration profiles"
+
+config MINIO_MULTIFS_EXT4_4K
+	bool "ext4 4k block size"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on ext4 with standard 4k block size.
+	  This is the default ext4 configuration.
+
+config MINIO_MULTIFS_EXT4_16K_BIGALLOC
+	bool "ext4 16k bigalloc"
+	default n
+	output yaml
+	help
+	  Test MinIO workloads on ext4 with 16k bigalloc enabled.
+	  Bigalloc reduces metadata overhead and can improve
+	  performance for large S3 objects.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_EXT4
+
+config MINIO_MULTIFS_TEST_BTRFS
+	bool "Test btrfs configurations"
+	default y
+	output yaml
+	help
+	  Enable testing MinIO workloads on btrfs filesystem with
+	  common default configuration profile.
+
+if MINIO_MULTIFS_TEST_BTRFS
+
+menu "btrfs configuration profiles"
+
+config MINIO_MULTIFS_BTRFS_DEFAULT
+	bool "btrfs default profile"
+	default y
+	output yaml
+	help
+	  Test MinIO workloads on btrfs with default configuration.
+	  This includes modern defaults with free-space-tree and
+	  no-holes features enabled.
+
+endmenu
+
+endif # MINIO_MULTIFS_TEST_BTRFS
+
+config MINIO_MULTIFS_RESULTS_DIR
+	string "Multi-filesystem results directory"
+	output yaml
+	default "/data/minio-multifs-benchmark"
+	help
+	  Directory where multi-filesystem test results and logs will be stored.
+	  Each filesystem configuration will have its own subdirectory.
+
+endif # MINIO_ENABLE_MULTIFS_TESTING
+
+endif # !KDEVOPS_USE_DECLARED_HOSTS && MINIO_STORAGE_ENABLE
+
+endmenu
diff --git a/workflows/minio/Kconfig.warp b/workflows/minio/Kconfig.warp
new file mode 100644
index 00000000..6a8fdb9a
--- /dev/null
+++ b/workflows/minio/Kconfig.warp
@@ -0,0 +1,141 @@
+menu "MinIO Warp benchmark configuration"
+
+config MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+	bool "Run comprehensive benchmark suite"
+	default y
+	output yaml
+	help
+	  Run a complete suite of benchmarks including mixed, GET, PUT, DELETE,
+	  LIST operations with various object sizes and concurrency levels.
+	  This provides the most thorough performance analysis.
+
+if !MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+
+choice
+	prompt "Warp benchmark type"
+	default MINIO_WARP_BENCHMARK_MIXED
+	help
+	  Select the primary benchmark type for MinIO Warp testing.
+
+config MINIO_WARP_BENCHMARK_MIXED
+	bool "Mixed workload (GET/PUT/DELETE)"
+	help
+	  Run a mixed workload benchmark combining GET, PUT, and DELETE operations
+	  to simulate realistic S3 usage patterns.
+
+config MINIO_WARP_BENCHMARK_GET
+	bool "GET operations (download)"
+	help
+	  Focus on download performance testing with GET operations.
+
+config MINIO_WARP_BENCHMARK_PUT
+	bool "PUT operations (upload)"
+	help
+	  Focus on upload performance testing with PUT operations.
+
+config MINIO_WARP_BENCHMARK_DELETE
+	bool "DELETE operations"
+	help
+	  Test object deletion performance.
+
+config MINIO_WARP_BENCHMARK_LIST
+	bool "LIST operations"
+	help
+	  Test bucket and object listing performance.
+
+config MINIO_WARP_BENCHMARK_MULTIPART
+	bool "MULTIPART upload"
+	help
+	  Test large file upload performance using multipart uploads.
+
+endchoice
+
+endif # !MINIO_WARP_RUN_COMPREHENSIVE_SUITE
+
+config MINIO_WARP_BENCHMARK_TYPE
+	string
+	output yaml
+	default "get" if MINIO_WARP_BENCHMARK_GET
+	default "put" if MINIO_WARP_BENCHMARK_PUT
+	default "delete" if MINIO_WARP_BENCHMARK_DELETE
+	default "list" if MINIO_WARP_BENCHMARK_LIST
+	default "multipart" if MINIO_WARP_BENCHMARK_MULTIPART
+	default "mixed"
+
+config MINIO_WARP_DURATION
+	string "Benchmark duration"
+	output yaml
+	default "5m"
+	help
+	  Duration for each benchmark run. Examples: 30s, 5m, 1h.
+	  Longer durations provide more stable results but take more time.
+
+config MINIO_WARP_CONCURRENT_REQUESTS
+	int "Concurrent requests"
+	output yaml
+	default 10
+	range 1 1000
+	help
+	  Number of concurrent requests to send to MinIO.
+	  Higher values increase load but may overwhelm the system.
+
+config MINIO_WARP_OBJECT_SIZE
+	string "Object size for testing"
+	output yaml
+	default "1MB"
+	help
+	  Size of objects to use in benchmarks. Examples: 1KB, 1MB, 10MB.
+	  Larger objects test throughput, smaller objects test IOPS.
+
+config MINIO_WARP_OBJECTS_PER_REQUEST
+	int "Objects per request"
+	output yaml
+	default 100
+	range 1 10000
+	help
+	  Number of objects to use per request in the benchmark.
+	  More objects provide better statistical accuracy.
+
+config MINIO_WARP_BUCKET_NAME
+	string "S3 bucket name for testing"
+	output yaml
+	default "warp-benchmark-bucket"
+	help
+	  Name of the S3 bucket to create and use for benchmarking.
+
+config MINIO_WARP_AUTO_TERMINATE
+	bool "Auto-terminate when results stabilize"
+	output yaml
+	default y
+	help
+	  Automatically terminate the benchmark when performance results
+	  have stabilized, potentially reducing test time.
+
+config MINIO_WARP_ENABLE_CLEANUP
+	bool "Clean up test data after benchmarks"
+	output yaml
+	default y
+	help
+	  Remove test objects and buckets after benchmarking completes.
+	  Disable if you want to inspect test data afterwards.
+
+config MINIO_WARP_OUTPUT_FORMAT
+	string "Output format"
+	output yaml
+	default "json"
+	help
+	  Output format for benchmark results. Options: json, csv, text.
+	  JSON format provides the most detailed metrics for analysis.
+
+config MINIO_WARP_ENABLE_WEB_UI
+	bool "Enable Warp Web UI for real-time monitoring"
+	output yaml
+	default n
+	help
+	  Enable the Warp web interface for real-time benchmark monitoring.
+	  Access the UI at http://localhost:7762 during benchmark runs.
+
+config MINIO_WARP_WEB_UI_PORT
+	int "Web UI port"
+	output yaml
+	default 7762
+	depends on MINIO_WARP_ENABLE_WEB_UI
+	help
+	  Port for the Warp web interface.
+
+endmenu
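The options above map fairly directly onto warp's command line (`--duration`, `--concurrent`, `--obj.size`, `--bucket`, `--autoterm`). A sketch of that mapping, where the `cfg` keys stand in for the YAML variables kdevops generates and are assumptions here, not the role's actual variable names:

```python
def warp_args(cfg):
    """Translate Kconfig-derived Warp settings into warp CLI arguments.

    cfg keys are illustrative stand-ins for the generated YAML vars.
    """
    args = [
        "warp", cfg.get("minio_warp_benchmark_type", "mixed"),
        "--duration", cfg.get("minio_warp_duration", "5m"),
        "--concurrent", str(cfg.get("minio_warp_concurrent_requests", 10)),
        "--obj.size", cfg.get("minio_warp_object_size", "1MB"),
        "--bucket", cfg.get("minio_warp_bucket_name", "warp-benchmark-bucket"),
    ]
    if cfg.get("minio_warp_auto_terminate", True):
        args.append("--autoterm")
    return args
```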
diff --git a/workflows/minio/Makefile b/workflows/minio/Makefile
new file mode 100644
index 00000000..c543ed3b
--- /dev/null
+++ b/workflows/minio/Makefile
@@ -0,0 +1,76 @@
+MINIO_DATA_TARGET			:= minio
+MINIO_DATA_TARGET_INSTALL		:= minio-install
+MINIO_DATA_TARGET_UNINSTALL		:= minio-uninstall
+MINIO_DATA_TARGET_DESTROY		:= minio-destroy
+MINIO_DATA_TARGET_RUN			:= minio-warp
+MINIO_DATA_TARGET_RESULTS		:= minio-results
+
+MINIO_PLAYBOOK		:= playbooks/minio.yml
+
+HELP_TARGETS += minio-help
+
+$(MINIO_DATA_TARGET): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)$(MAKE) $(MINIO_DATA_TARGET_INSTALL)
+
+$(MINIO_DATA_TARGET_INSTALL): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_install
+
+$(MINIO_DATA_TARGET_UNINSTALL): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_uninstall
+
+$(MINIO_DATA_TARGET_DESTROY): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_destroy
+
+$(MINIO_DATA_TARGET_RUN): $(ANSIBLE_INVENTORY_FILE)
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-f 30 -i hosts $(MINIO_PLAYBOOK) \
+		--extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+		--tags vars,minio_warp
+
+$(MINIO_DATA_TARGET_RESULTS):
+	$(Q)if [ -d workflows/minio/results ]; then \
+		python3 workflows/minio/scripts/generate_warp_report.py workflows/minio/results/ && \
+		echo "" && \
+		echo "📊 MinIO Warp Analysis Complete!" && \
+		echo "Results available in workflows/minio/results/" && \
+		echo "  - warp_benchmark_report.html (open in browser)" && \
+		echo "  - PNG charts for performance visualization" && \
+		ls -lh workflows/minio/results/*.png 2>/dev/null | tail -5; \
+	else \
+		echo "No results directory found. Run 'make minio-warp' first."; \
+	fi
+
+minio-help:
+	@echo "MinIO Warp S3 benchmarking targets:"
+	@echo ""
+	@echo "minio                   - Install and setup MinIO server"
+	@echo "minio-install           - Install and setup MinIO server"
+	@echo "minio-uninstall         - Stop and remove MinIO containers"
+	@echo "minio-destroy           - Remove MinIO containers and clean up data"
+	@echo "minio-warp              - Run MinIO Warp benchmarks"
+	@echo "minio-results           - Collect and analyze benchmark results"
+	@echo ""
+	@echo "Example usage:"
+	@echo "  make defconfig-minio-warp    # Configure for Warp benchmarking"
+	@echo "  make bringup                 # Setup test nodes"
+	@echo "  make minio                   # Install MinIO server"
+	@echo "  make minio-warp              # Run benchmarks"
+	@echo "  make minio-results           # Generate analysis and visualizations"
+	@echo ""
+	@echo "Visualization options:"
+	@echo "  - Enable MINIO_WARP_ENABLE_WEB_UI in menuconfig for real-time monitoring"
+	@echo "  - Access web UI at http://node-ip:7762 during benchmarks"
+	@echo "  - View HTML report: workflows/minio/results/warp_benchmark_report.html"
+
+.PHONY: $(MINIO_DATA_TARGET) $(MINIO_DATA_TARGET_INSTALL) $(MINIO_DATA_TARGET_UNINSTALL)
+.PHONY: $(MINIO_DATA_TARGET_DESTROY) $(MINIO_DATA_TARGET_RUN) $(MINIO_DATA_TARGET_RESULTS)
+.PHONY: minio-help
diff --git a/workflows/minio/scripts/analyze_warp_results.py b/workflows/minio/scripts/analyze_warp_results.py
new file mode 100755
index 00000000..c20c57dd
--- /dev/null
+++ b/workflows/minio/scripts/analyze_warp_results.py
@@ -0,0 +1,858 @@
+#!/usr/bin/env python3
+"""
+Analyze MinIO Warp benchmark results and generate reports with visualizations.
+"""
+
+import json
+import glob
+import os
+import sys
+from pathlib import Path
+from datetime import datetime
+import matplotlib.pyplot as plt
+import matplotlib.patches as mpatches
+import numpy as np
+from typing import Dict, List, Any
+
+
+def load_warp_results(results_dir: Path) -> List[Dict[str, Any]]:
+    """Load all Warp JSON result files from the results directory."""
+    results = []
+    json_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+    for json_file in sorted(json_files):
+        try:
+            with open(json_file, "r") as f:
+                content = f.read()
+                # Find where the JSON starts (after any terminal output)
+                json_start = content.find("{")
+                if json_start >= 0:
+                    json_content = content[json_start:]
+                    data = json.loads(json_content)
+                    data["_filename"] = json_file.name
+                    data["_filepath"] = str(json_file)
+                    results.append(data)
+                    print(f"Loaded: {json_file.name}")
+                else:
+                    print(f"No JSON found in {json_file}")
+        except Exception as e:
+            print(f"Error loading {json_file}: {e}")
+
+    return results
+
+
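The scan-for-`{` step in load_warp_results() exists because warp's JSON output is often preceded by terminal status lines; skipping to the first brace recovers the document. Exercised in isolation (the banner text is made up):

```python
import json

# Simulated warp output: a status banner followed by the JSON document
raw = "warp: Benchmark data written to disk\n" + json.dumps(
    {"total": {"total_requests": 42}}
)

# Same recovery trick as load_warp_results(): parse from the first "{"
start = raw.find("{")
data = json.loads(raw[start:])
```

Note this assumes the banner itself contains no `{`; a stray brace in the preamble would still break parsing.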
+def extract_metrics(result: Dict[str, Any]) -> Dict[str, Any]:
+    """Extract key metrics from a Warp result."""
+    metrics = {
+        "filename": result.get("_filename", "unknown"),
+        "timestamp": "",
+        "operation": "mixed",
+    }
+
+    # Check if we have the total stats
+    if "total" in result:
+        total = result["total"]
+
+        # Extract basic info
+        metrics["timestamp"] = total.get("start_time", "")
+        metrics["total_requests"] = total.get("total_requests", 0)
+        metrics["total_objects"] = total.get("total_objects", 0)
+        metrics["total_errors"] = total.get("total_errors", 0)
+        metrics["total_bytes"] = total.get("total_bytes", 0)
+        metrics["concurrency"] = total.get("concurrency", 0)
+
+        # Calculate duration in seconds
+        start_time = total.get("start_time", "")
+        end_time = total.get("end_time", "")
+        if start_time and end_time:
+            try:
+                from dateutil import parser
+
+                start = parser.parse(start_time)
+                end = parser.parse(end_time)
+                metrics["duration_seconds"] = (end - start).total_seconds()
+            except ImportError:
+                # dateutil not available; fall back to fromisoformat,
+                # assuming the common ISO-8601 timestamp form
+                try:
+                    start = datetime.fromisoformat(
+                        start_time.replace("Z", "+00:00")
+                    )
+                    end = datetime.fromisoformat(end_time.replace("Z", "+00:00"))
+                    metrics["duration_seconds"] = (end - start).total_seconds()
+                except ValueError:
+                    metrics["duration_seconds"] = 0
+
+        # Get throughput if directly available
+        if "throughput" in total and isinstance(total["throughput"], dict):
+            # Throughput is a complex structure with segmented data
+            tp = total["throughput"]
+            if "bytes" in tp:
+                bytes_total = tp["bytes"]
+                duration_ms = tp.get("measure_duration_millis", 1000)
+                duration_s = duration_ms / 1000
+                if duration_s > 0:
+                    metrics["throughput_avg_mbps"] = (
+                        bytes_total / (1024 * 1024)
+                    ) / duration_s
+            elif "segmented" in tp:
+                # Use median throughput
+                metrics["throughput_avg_mbps"] = tp["segmented"].get(
+                    "median_bps", 0
+                ) / (1024 * 1024)
+        elif metrics.get("duration_seconds", 0) > 0 and metrics["total_bytes"] > 0:
+            # Calculate throughput from bytes and duration
+            metrics["throughput_avg_mbps"] = (
+                metrics["total_bytes"] / (1024 * 1024)
+            ) / metrics["duration_seconds"]
+
+        # Calculate operations per second
+        if metrics.get("duration_seconds", 0) > 0:
+            metrics["ops_per_second"] = (
+                metrics["total_requests"] / metrics["duration_seconds"]
+            )
+
+    # Check for operations breakdown by type
+    if "by_op_type" in result:
+        ops = result["by_op_type"]
+
+        # Process each operation type
+        for op_type in ["GET", "PUT", "DELETE", "STAT"]:
+            if op_type in ops:
+                op_data = ops[op_type]
+                op_lower = op_type.lower()
+
+                # Extract operation count
+                if "ops" in op_data:
+                    metrics[f"{op_lower}_requests"] = op_data["ops"]
+
+                # Extract average duration
+                if "avg_duration" in op_data:
+                    metrics[f"{op_lower}_latency_avg_ms"] = (
+                        op_data["avg_duration"] / 1e6
+                    )
+
+                # Extract percentiles if available
+                if "percentiles_millis" in op_data:
+                    percentiles = op_data["percentiles_millis"]
+                    if "50" in percentiles:
+                        metrics[f"{op_lower}_latency_p50"] = percentiles["50"]
+                    if "90" in percentiles:
+                        metrics[f"{op_lower}_latency_p90"] = percentiles["90"]
+                    if "99" in percentiles:
+                        metrics[f"{op_lower}_latency_p99"] = percentiles["99"]
+
+                # Extract min/max if available
+                if "fastest_millis" in op_data:
+                    metrics[f"{op_lower}_latency_min"] = op_data["fastest_millis"]
+                if "slowest_millis" in op_data:
+                    metrics[f"{op_lower}_latency_max"] = op_data["slowest_millis"]
+
+        # Calculate aggregate latency metrics
+        latencies = []
+        for op in ["get", "put", "delete"]:
+            if f"{op}_latency_avg_ms" in metrics:
+                latencies.append(metrics[f"{op}_latency_avg_ms"])
+        if latencies:
+            metrics["latency_avg_ms"] = sum(latencies) / len(latencies)
+
+    # Extract from summary if present
+    if "summary" in result:
+        summary = result["summary"]
+        if "throughput_MiBs" in summary:
+            metrics["throughput_avg_mbps"] = summary["throughput_MiBs"]
+        if "ops_per_sec" in summary:
+            metrics["ops_per_second"] = summary["ops_per_sec"]
+
+    return metrics
+
+
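The unit conversions extract_metrics() performs in passing are worth making explicit: warp's `avg_duration` is treated as nanoseconds (divided by 1e6 to get milliseconds), and byte totals become MB/s over the run duration. In isolation:

```python
# Latency: nanoseconds -> milliseconds, as in the avg_duration handling
avg_duration_ns = 12_500_000        # 12.5 ms expressed in nanoseconds
latency_ms = avg_duration_ns / 1e6

# Throughput: total bytes over wall-clock seconds -> MB/s
total_bytes = 512 * 1024 * 1024     # 512 MiB moved during the run
duration_s = 4.0
throughput_mbps = (total_bytes / (1024 * 1024)) / duration_s
```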
+def generate_throughput_chart(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate throughput comparison chart."""
+    if not metrics_list:
+        return
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
+
+    # Throughput bar chart
+    labels = [
+        m["filename"].replace("warp_benchmark_", "").replace(".json", "")[:20]
+        for m in metrics_list
+    ]
+    x = np.arange(len(labels))
+
+    avg_throughput = [m.get("throughput_avg_mbps", 0) for m in metrics_list]
+
+    width = 0.35
+    ax1.bar(x, avg_throughput, width, label="Throughput", color="skyblue")
+
+    ax1.set_xlabel("Test Run")
+    ax1.set_ylabel("Throughput (MB/s)")
+    ax1.set_title("MinIO Warp Throughput Performance")
+    ax1.set_xticks(x)
+    ax1.set_xticklabels(labels, rotation=45, ha="right")
+    ax1.legend()
+    ax1.grid(True, alpha=0.3)
+
+    # Operations per second
+    ops_per_sec = [m.get("ops_per_second", 0) for m in metrics_list]
+    ax2.bar(x, ops_per_sec, color="orange")
+    ax2.set_xlabel("Test Run")
+    ax2.set_ylabel("Operations/Second")
+    ax2.set_title("Operations Per Second")
+    ax2.set_xticks(x)
+    ax2.set_xticklabels(labels, rotation=45, ha="right")
+    ax2.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    output_file = output_dir / "warp_throughput_performance.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_latency_chart(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate latency comparison chart."""
+    if not metrics_list:
+        return
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
+
+    labels = [
+        m["filename"].replace("warp_benchmark_", "").replace(".json", "")[:20]
+        for m in metrics_list
+    ]
+    x = np.arange(len(labels))
+
+    # Collect latency data by operation type
+    operations = ["get", "put", "delete"]
+    colors = {"get": "steelblue", "put": "orange", "delete": "red"}
+
+    # Operation-specific latencies
+    width = 0.2
+    offset = -width
+    for op in operations:
+        op_latencies = []
+        for m in metrics_list:
+            # Try operation-specific latency first, then fall back to general
+            lat = m.get(f"{op}_latency_avg_ms", m.get("latency_avg_ms", 0))
+            op_latencies.append(lat)
+
+        if any(lat > 0 for lat in op_latencies):
+            ax1.bar(x + offset, op_latencies, width, label=op.upper(), color=colors[op])
+            offset += width
+
+    ax1.set_xlabel("Test Run")
+    ax1.set_ylabel("Latency (ms)")
+    ax1.set_title("Request Latency Distribution")
+    ax1.set_xticks(x)
+    ax1.set_xticklabels(labels, rotation=45, ha="right")
+    ax1.legend()
+    ax1.grid(True, alpha=0.3)
+
+    # Min/Max latency range
+    # extract_metrics() records per-operation extremes ({op}_latency_min and
+    # {op}_latency_max), so aggregate across operations here
+    lat_min = [
+        min((m[k] for k in ("get_latency_min", "put_latency_min",
+                            "delete_latency_min") if k in m), default=0)
+        for m in metrics_list
+    ]
+    lat_max = [
+        max((m[k] for k in ("get_latency_max", "put_latency_max",
+                            "delete_latency_max") if k in m), default=0)
+        for m in metrics_list
+    ]
+
+    ax2.bar(x - width / 2, lat_min, width, label="Min", color="green")
+    ax2.bar(x + width / 2, lat_max, width, label="Max", color="red")
+
+    ax2.set_xlabel("Test Run")
+    ax2.set_ylabel("Latency (ms)")
+    ax2.set_title("Latency Range (Min/Max)")
+    ax2.set_xticks(x)
+    ax2.set_xticklabels(labels, rotation=45, ha="right")
+    ax2.legend()
+    ax2.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    output_file = output_dir / "warp_latency_analysis.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_performance_summary_chart(
+    metrics_list: List[Dict[str, Any]], output_dir: Path
+):
+    """Generate a comprehensive performance summary chart."""
+    if not metrics_list:
+        return
+
+    fig = plt.figure(figsize=(16, 10))
+    gs = fig.add_gridspec(3, 2, hspace=0.3, wspace=0.25)
+
+    # Throughput over time
+    ax1 = fig.add_subplot(gs[0, :])
+    timestamps = []
+    throughputs = []
+    for m in metrics_list:
+        try:
+            if m.get("timestamp"):
+                timestamps.append(
+                    datetime.fromisoformat(m["timestamp"].replace("Z", "+00:00"))
+                )
+                throughputs.append(m.get("throughput_avg_mbps", 0))
+        except (ValueError, AttributeError):
+            # Skip entries with missing or malformed timestamps
+            pass
+
+    if timestamps:
+        ax1.plot(timestamps, throughputs, "o-", linewidth=2, markersize=8, color="blue")
+        ax1.set_xlabel("Time")
+        ax1.set_ylabel("Throughput (MB/s)")
+        ax1.set_title("Throughput Over Time", fontsize=14, fontweight="bold")
+        ax1.grid(True, alpha=0.3)
+        ax1.tick_params(axis="x", rotation=45)
+
+    # Operations distribution
+    ax2 = fig.add_subplot(gs[1, 0])
+    ops_data = [m.get("ops_per_second", 0) for m in metrics_list]
+    if ops_data:
+        ax2.hist(ops_data, bins=10, color="orange", edgecolor="black", alpha=0.7)
+        ax2.set_xlabel("Operations/Second")
+        ax2.set_ylabel("Frequency")
+        ax2.set_title("Operations Distribution", fontsize=12, fontweight="bold")
+        ax2.grid(True, alpha=0.3)
+
+    # Latency box plot
+    ax3 = fig.add_subplot(gs[1, 1])
+    latency_data = []
+    for m in metrics_list:
+        lat_data = []
+        if m.get("latency_avg_ms"):
+            lat_data.extend(
+                [
+                    m.get("latency_min_ms", 0),
+                    m.get("latency_percentile_50", 0),
+                    m.get("latency_avg_ms", 0),
+                    m.get("latency_percentile_99", 0),
+                    m.get("latency_max_ms", 0),
+                ]
+            )
+        if lat_data:
+            latency_data.append(lat_data)
+
+    if latency_data:
+        ax3.boxplot(latency_data)
+        ax3.set_xlabel("Test Run")
+        ax3.set_ylabel("Latency (ms)")
+        ax3.set_title("Latency Distribution", fontsize=12, fontweight="bold")
+        ax3.grid(True, alpha=0.3)
+
+    # Performance metrics table
+    ax4 = fig.add_subplot(gs[2, :])
+    ax4.axis("tight")
+    ax4.axis("off")
+
+    # Create summary statistics
+    if metrics_list:
+        avg_metrics = metrics_list[-1]  # Use most recent for now
+        table_data = [
+            ["Metric", "Value"],
+            [
+                "Average Throughput",
+                f"{avg_metrics.get('throughput_avg_mbps', 0):.2f} MB/s",
+            ],
+            ["Operations/Second", f"{avg_metrics.get('ops_per_second', 0):.0f}"],
+            ["Average Latency", f"{avg_metrics.get('latency_avg_ms', 0):.2f} ms"],
+            ["P99 Latency", f"{avg_metrics.get('latency_percentile_99', 0):.2f} ms"],
+            ["Total Operations", f"{avg_metrics.get('ops_total', 0):.0f}"],
+            ["Object Size", str(avg_metrics.get("object_size", "unknown"))],
+            ["Error Rate", f"{avg_metrics.get('error_rate', 0):.2%}"],
+        ]
+
+        table = ax4.table(
+            cellText=table_data, cellLoc="left", loc="center", colWidths=[0.3, 0.3]
+        )
+        table.auto_set_font_size(False)
+        table.set_fontsize(10)
+        table.scale(1, 1.5)
+
+        # Style the header row
+        for i in range(2):
+            table[(0, i)].set_facecolor("#40466e")
+            table[(0, i)].set_text_props(weight="bold", color="white")
+
+    plt.suptitle("MinIO Warp Performance Summary", fontsize=16, fontweight="bold")
+
+    output_file = output_dir / "warp_performance_summary.png"
+    plt.savefig(output_file, dpi=150, bbox_inches="tight")
+    plt.close()
+    print(f"Generated: {output_file}")
+
+
+def generate_text_report(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate a detailed text report."""
+    output_file = output_dir / "warp_analysis_report.txt"
+
+    with open(output_file, "w") as f:
+        f.write("=" * 80 + "\n")
+        f.write("MinIO Warp Benchmark Analysis Report\n")
+        f.write("=" * 80 + "\n\n")
+        f.write(f"Generated: {datetime.now().isoformat()}\n")
+        f.write(f"Total test runs analyzed: {len(metrics_list)}\n\n")
+
+        if not metrics_list:
+            f.write("No benchmark results found.\n")
+            return
+
+        # Overall statistics
+        f.write("OVERALL PERFORMANCE STATISTICS\n")
+        f.write("-" * 40 + "\n")
+
+        throughputs = [
+            m.get("throughput_avg_mbps", 0)
+            for m in metrics_list
+            if m.get("throughput_avg_mbps")
+        ]
+        if throughputs:
+            f.write(f"Throughput:\n")
+            f.write(f"  Average: {np.mean(throughputs):.2f} MB/s\n")
+            f.write(f"  Median:  {np.median(throughputs):.2f} MB/s\n")
+            f.write(f"  Min:     {np.min(throughputs):.2f} MB/s\n")
+            f.write(f"  Max:     {np.max(throughputs):.2f} MB/s\n")
+            f.write(f"  StdDev:  {np.std(throughputs):.2f} MB/s\n\n")
+
+        ops_rates = [
+            m.get("ops_per_second", 0) for m in metrics_list if m.get("ops_per_second")
+        ]
+        if ops_rates:
+            f.write(f"Operations per Second:\n")
+            f.write(f"  Average: {np.mean(ops_rates):.0f} ops/s\n")
+            f.write(f"  Median:  {np.median(ops_rates):.0f} ops/s\n")
+            f.write(f"  Min:     {np.min(ops_rates):.0f} ops/s\n")
+            f.write(f"  Max:     {np.max(ops_rates):.0f} ops/s\n\n")
+
+        latencies = [
+            m.get("latency_avg_ms", 0) for m in metrics_list if m.get("latency_avg_ms")
+        ]
+        if latencies:
+            f.write(f"Average Latency:\n")
+            f.write(f"  Mean:    {np.mean(latencies):.2f} ms\n")
+            f.write(f"  Median:  {np.median(latencies):.2f} ms\n")
+            f.write(f"  Min:     {np.min(latencies):.2f} ms\n")
+            f.write(f"  Max:     {np.max(latencies):.2f} ms\n\n")
+
+        # Individual test run details
+        f.write("=" * 80 + "\n")
+        f.write("INDIVIDUAL TEST RUN DETAILS\n")
+        f.write("=" * 80 + "\n\n")
+
+        for i, metrics in enumerate(metrics_list, 1):
+            f.write(f"Test Run #{i}\n")
+            f.write("-" * 40 + "\n")
+            f.write(f"File: {metrics.get('filename', 'unknown')}\n")
+            f.write(f"Timestamp: {metrics.get('timestamp', 'N/A')}\n")
+            f.write(f"Operation: {metrics.get('operation', 'unknown')}\n")
+            f.write(f"Duration: {metrics.get('duration_seconds', 0):.1f} seconds\n")
+            f.write(f"Object Size: {metrics.get('object_size', 'unknown')}\n")
+            f.write(f"Total Objects: {metrics.get('objects_total', 0)}\n")
+
+            if metrics.get("throughput_avg_mbps"):
+                f.write(f"\nThroughput Performance:\n")
+                f.write(
+                    f"  Average: {metrics.get('throughput_avg_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  Min:     {metrics.get('throughput_min_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  Max:     {metrics.get('throughput_max_mbps', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  P50:     {metrics.get('throughput_percentile_50', 0):.2f} MB/s\n"
+                )
+                f.write(
+                    f"  P99:     {metrics.get('throughput_percentile_99', 0):.2f} MB/s\n"
+                )
+
+            if metrics.get("ops_per_second"):
+                f.write(f"\nOperations Performance:\n")
+                f.write(f"  Total Operations: {metrics.get('ops_total', 0):.0f}\n")
+                f.write(
+                    f"  Operations/Second: {metrics.get('ops_per_second', 0):.0f}\n"
+                )
+                f.write(
+                    f"  Avg Duration: {metrics.get('ops_avg_duration_ms', 0):.2f} ms\n"
+                )
+
+            if metrics.get("latency_avg_ms"):
+                f.write(f"\nLatency Metrics:\n")
+                f.write(f"  Average: {metrics.get('latency_avg_ms', 0):.2f} ms\n")
+                f.write(f"  Min:     {metrics.get('latency_min_ms', 0):.2f} ms\n")
+                f.write(f"  Max:     {metrics.get('latency_max_ms', 0):.2f} ms\n")
+                f.write(
+                    f"  P50:     {metrics.get('latency_percentile_50', 0):.2f} ms\n"
+                )
+                f.write(
+                    f"  P99:     {metrics.get('latency_percentile_99', 0):.2f} ms\n"
+                )
+
+            if metrics.get("error_count", 0) > 0:
+                f.write(f"\nErrors:\n")
+                f.write(f"  Error Count: {metrics.get('error_count', 0)}\n")
+                f.write(f"  Error Rate: {metrics.get('error_rate', 0):.2%}\n")
+
+            f.write("\n")
+
+        f.write("=" * 80 + "\n")
+        f.write("END OF REPORT\n")
+        f.write("=" * 80 + "\n")
+
+    print(f"Generated: {output_file}")
+
+
+def generate_html_report(metrics_list: List[Dict[str, Any]], output_dir: Path):
+    """Generate a comprehensive HTML report with embedded visualizations."""
+    output_file = output_dir / "warp_benchmark_report.html"
+
+    # Check if PNG files exist
+    throughput_png = output_dir / "warp_throughput_performance.png"
+    latency_png = output_dir / "warp_latency_analysis.png"
+    summary_png = output_dir / "warp_performance_summary.png"
+
+    with open(output_file, "w") as f:
+        f.write(
+            """<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="UTF-8">
+    <title>MinIO Warp Benchmark Report</title>
+    <style>
+        body {
+            font-family: 'Segoe UI', Arial, sans-serif;
+            max-width: 1400px;
+            margin: 0 auto;
+            padding: 20px;
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            min-height: 100vh;
+        }
+        .container {
+            background: white;
+            border-radius: 15px;
+            box-shadow: 0 20px 60px rgba(0,0,0,0.3);
+            padding: 40px;
+        }
+        h1 {
+            color: #2c3e50;
+            text-align: center;
+            font-size: 2.5em;
+            margin-bottom: 10px;
+            text-shadow: 2px 2px 4px rgba(0,0,0,0.1);
+        }
+        .subtitle {
+            text-align: center;
+            color: #7f8c8d;
+            margin-bottom: 30px;
+        }
+        .summary-grid {
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin: 30px 0;
+        }
+        .metric-card {
+            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+            color: white;
+            padding: 20px;
+            border-radius: 10px;
+            text-align: center;
+            box-shadow: 0 5px 15px rgba(0,0,0,0.2);
+        }
+        .metric-value {
+            font-size: 2em;
+            font-weight: bold;
+            margin: 10px 0;
+        }
+        .metric-label {
+            font-size: 0.9em;
+            opacity: 0.9;
+        }
+        .section {
+            margin: 40px 0;
+        }
+        .section h2 {
+            color: #34495e;
+            border-bottom: 2px solid #667eea;
+            padding-bottom: 10px;
+            margin-bottom: 20px;
+        }
+        table {
+            width: 100%;
+            border-collapse: collapse;
+            margin: 20px 0;
+        }
+        th {
+            background: #667eea;
+            color: white;
+            padding: 12px;
+            text-align: left;
+        }
+        td {
+            padding: 10px;
+            border-bottom: 1px solid #ecf0f1;
+        }
+        tr:hover {
+            background: #f8f9fa;
+        }
+        .chart-container {
+            text-align: center;
+            margin: 30px 0;
+        }
+        .chart-container img {
+            max-width: 100%;
+            height: auto;
+            border-radius: 10px;
+            box-shadow: 0 5px 15px rgba(0,0,0,0.1);
+        }
+        .performance-good {
+            color: #27ae60;
+            font-weight: bold;
+        }
+        .performance-warning {
+            color: #f39c12;
+            font-weight: bold;
+        }
+        .performance-bad {
+            color: #e74c3c;
+            font-weight: bold;
+        }
+        .footer {
+            text-align: center;
+            margin-top: 40px;
+            padding-top: 20px;
+            border-top: 1px solid #ecf0f1;
+            color: #7f8c8d;
+        }
+    </style>
+</head>
+<body>
+    <div class="container">
+        <h1>🚀 MinIO Warp Benchmark Report</h1>
+        <div class="subtitle">Generated on """
+            + datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+            + """</div>
+
+        <div class="section">
+            <h2>📊 Performance Summary</h2>
+            <div class="summary-grid">
+"""
+        )
+
+        if metrics_list:
+            # Calculate summary statistics
+            throughputs = [
+                m.get("throughput_avg_mbps", 0)
+                for m in metrics_list
+                if m.get("throughput_avg_mbps")
+            ]
+            ops_rates = [
+                m.get("ops_per_second", 0)
+                for m in metrics_list
+                if m.get("ops_per_second")
+            ]
+            latencies = [
+                m.get("latency_avg_ms", 0)
+                for m in metrics_list
+                if m.get("latency_avg_ms")
+            ]
+
+            if throughputs:
+                avg_throughput = np.mean(throughputs)
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Average Throughput</div>
+                    <div class="metric-value">{avg_throughput:.1f}</div>
+                    <div class="metric-label">MB/s</div>
+                </div>
+                """
+                )
+
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Peak Throughput</div>
+                    <div class="metric-value">{np.max(throughputs):.1f}</div>
+                    <div class="metric-label">MB/s</div>
+                </div>
+                """
+                )
+
+            if ops_rates:
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Avg Operations</div>
+                    <div class="metric-value">{np.mean(ops_rates):.0f}</div>
+                    <div class="metric-label">ops/second</div>
+                </div>
+                """
+                )
+
+            if latencies:
+                f.write(
+                    f"""
+                <div class="metric-card">
+                    <div class="metric-label">Avg Latency</div>
+                    <div class="metric-value">{np.mean(latencies):.1f}</div>
+                    <div class="metric-label">ms</div>
+                </div>
+                """
+                )
+
+            f.write(
+                f"""
+                <div class="metric-card">
+                    <div class="metric-label">Test Runs</div>
+                    <div class="metric-value">{len(metrics_list)}</div>
+                    <div class="metric-label">completed</div>
+                </div>
+            """
+            )
+
+        f.write(
+            """
+            </div>
+        </div>
+
+        <div class="section">
+            <h2>📈 Performance Visualizations</h2>
+        """
+        )
+
+        if throughput_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{throughput_png.name}" alt="Throughput Performance">
+            </div>
+            """
+            )
+
+        if latency_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{latency_png.name}" alt="Latency Analysis">
+            </div>
+            """
+            )
+
+        if summary_png.exists():
+            f.write(
+                f"""
+            <div class="chart-container">
+                <img src="{summary_png.name}" alt="Performance Summary">
+            </div>
+            """
+            )
+
+        f.write(
+            """
+        <div class="section">
+            <h2>📋 Detailed Results</h2>
+            <table>
+                <thead>
+                    <tr>
+                        <th>Test Run</th>
+                        <th>Operation</th>
+                        <th>Throughput (MB/s)</th>
+                        <th>Ops/Second</th>
+                        <th>Avg Latency (ms)</th>
+                        <th>P99 Latency (ms)</th>
+                        <th>Errors</th>
+                    </tr>
+                </thead>
+                <tbody>
+        """
+        )
+
+        for metrics in metrics_list:
+            throughput = metrics.get("throughput_avg_mbps", 0)
+            ops_sec = metrics.get("ops_per_second", 0)
+            latency = metrics.get("latency_avg_ms", 0)
+            p99_lat = metrics.get("latency_percentile_99", 0)
+            errors = metrics.get("error_count", 0)
+
+            # Color code based on performance
+            throughput_class = (
+                "performance-good"
+                if throughput > 100
+                else "performance-warning" if throughput > 50 else "performance-bad"
+            )
+            latency_class = (
+                "performance-good"
+                if latency < 10
+                else "performance-warning" if latency < 50 else "performance-bad"
+            )
+
+            f.write(
+                f"""
+                <tr>
+                    <td>{metrics.get('filename', 'unknown').replace('warp_benchmark_', '').replace('.json', '')}</td>
+                    <td>{metrics.get('operation', 'mixed')}</td>
+                    <td class="{throughput_class}">{throughput:.2f}</td>
+                    <td>{ops_sec:.0f}</td>
+                    <td class="{latency_class}">{latency:.2f}</td>
+                    <td>{p99_lat:.2f}</td>
+                    <td>{errors}</td>
+                </tr>
+            """
+            )
+
+        f.write(
+            """
+                </tbody>
+            </table>
+        </div>
+
+        <div class="footer">
+            <p>MinIO Warp Benchmark Analysis | Generated by kdevops</p>
+        </div>
+    </div>
+</body>
+</html>
+        """
+        )
+
+    print(f"Generated: {output_file}")
+
+
+def main():
+    """Main analysis function."""
+    # Determine results directory
+    script_dir = Path(__file__).parent
+    results_dir = script_dir.parent / "results"
+
+    if not results_dir.exists():
+        print(f"Results directory not found: {results_dir}")
+        return 1
+
+    # Load all results
+    results = load_warp_results(results_dir)
+    if not results:
+        print("No Warp benchmark results found.")
+        return 1
+
+    print(f"\nFound {len(results)} benchmark result(s)")
+
+    # Extract metrics from each result
+    metrics_list = [extract_metrics(result) for result in results]
+
+    # Generate visualizations
+    print("\nGenerating visualizations...")
+    generate_throughput_chart(metrics_list, results_dir)
+    generate_latency_chart(metrics_list, results_dir)
+    generate_performance_summary_chart(metrics_list, results_dir)
+
+    # Generate reports
+    print("\nGenerating reports...")
+    generate_text_report(metrics_list, results_dir)
+    generate_html_report(metrics_list, results_dir)
+
+    print("\n✅ Analysis complete! Check the results directory for:")
+    print("  - warp_throughput_performance.png")
+    print("  - warp_latency_analysis.png")
+    print("  - warp_performance_summary.png")
+    print("  - warp_analysis_report.txt")
+    print("  - warp_benchmark_report.html")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/workflows/minio/scripts/generate_warp_report.py b/workflows/minio/scripts/generate_warp_report.py
new file mode 100755
index 00000000..2eff5222
--- /dev/null
+++ b/workflows/minio/scripts/generate_warp_report.py
@@ -0,0 +1,404 @@
+#!/usr/bin/env python3
+"""
+Generate graphs and HTML report from MinIO Warp benchmark results
+"""
+
+import json
+import os
+import sys
+import glob
+from datetime import datetime
+import matplotlib.pyplot as plt
+import matplotlib.dates as mdates
+from pathlib import Path
+
+
+def parse_warp_json(json_file):
+    """Parse Warp benchmark JSON output"""
+    with open(json_file, "r") as f:
+        content = f.read()
+        # Find the JSON object in the output (skip any non-JSON prefix)
+        json_start = content.find("{")
+        if json_start == -1:
+            raise ValueError(f"No JSON found in {json_file}")
+        json_content = content[json_start:]
+        return json.loads(json_content)
+
+
+def generate_throughput_graph(data, output_dir, filename_prefix):
+    """Generate throughput over time graph"""
+    segments = data["total"]["throughput"]["segmented"]["segments"]
+
+    # Extract timestamps and throughput values
+    times = []
+    throughput_mbps = []
+    ops_per_sec = []
+
+    for segment in segments:
+        time_str = segment["start"]
+        # fromisoformat() parses the RFC 3339 UTC offset directly (no need
+        # to rewrite "-07:00", which silently shifted the timezone); trim
+        # any sub-microsecond digits warp emits, rejected before Python 3.11.
+        if "." in time_str:
+            base, rest = time_str.split(".", 1)
+            n = next((i for i, c in enumerate(rest) if not c.isdigit()), len(rest))
+            time_str = base + "." + rest[:n][:6] + rest[n:]
+        times.append(datetime.fromisoformat(time_str))
+        throughput_mbps.append(
+            segment["bytes_per_sec"] / (1024 * 1024)
+        )  # Convert to MB/s
+        ops_per_sec.append(segment["obj_per_sec"])
+
+    # Create figure with two subplots
+    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 8))
+
+    # Throughput graph
+    ax1.plot(times, throughput_mbps, "b-", linewidth=2, marker="o")
+    ax1.set_ylabel("Throughput (MB/s)", fontsize=12)
+    ax1.set_title("MinIO Warp Benchmark - Throughput Over Time", fontsize=14)
+    ax1.grid(True, alpha=0.3)
+    ax1.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
+
+    # Add average line
+    avg_throughput = sum(throughput_mbps) / len(throughput_mbps)
+    ax1.axhline(
+        y=avg_throughput,
+        color="r",
+        linestyle="--",
+        alpha=0.7,
+        label=f"Average: {avg_throughput:.1f} MB/s",
+    )
+    ax1.legend()
+
+    # Operations per second graph
+    ax2.plot(times, ops_per_sec, "g-", linewidth=2, marker="s")
+    ax2.set_xlabel("Time", fontsize=12)
+    ax2.set_ylabel("Operations/sec", fontsize=12)
+    ax2.set_title("Operations Per Second", fontsize=14)
+    ax2.grid(True, alpha=0.3)
+    ax2.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
+
+    # Add average line
+    avg_ops = sum(ops_per_sec) / len(ops_per_sec)
+    ax2.axhline(
+        y=avg_ops,
+        color="r",
+        linestyle="--",
+        alpha=0.7,
+        label=f"Average: {avg_ops:.1f} ops/s",
+    )
+    ax2.legend()
+
+    plt.gcf().autofmt_xdate()
+    plt.tight_layout()
+
+    graph_file = os.path.join(output_dir, f"{filename_prefix}_throughput.png")
+    plt.savefig(graph_file, dpi=100, bbox_inches="tight")
+    plt.close()
+
+    return graph_file
+
+
+def generate_operation_stats_graph(data, output_dir, filename_prefix):
+    """Generate operation statistics bar chart"""
+    operations = data.get("operations", {})
+
+    if not operations:
+        return None
+
+    op_types = []
+    throughputs = []
+    latencies = []
+
+    for op_type, op_data in operations.items():
+        if op_type in ["DELETE", "GET", "PUT", "STAT"]:
+            op_types.append(op_type)
+            # Append a 0 default so all three lists stay the same length
+            # even when an operation lacks throughput or latency data,
+            # otherwise the bar charts below get mismatched inputs.
+            throughputs.append(op_data.get("throughput", {}).get("obj_per_sec", 0))
+            # Convert nanoseconds to milliseconds
+            latencies.append(op_data.get("latency", {}).get("mean", 0) / 1_000_000)
+
+    if not op_types:
+        return None
+
+    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
+
+    # Throughput bar chart
+    if throughputs:
+        ax1.bar(op_types, throughputs, color=["blue", "green", "red", "orange"])
+        ax1.set_ylabel("Operations/sec", fontsize=12)
+        ax1.set_title("Operation Throughput", fontsize=14)
+        ax1.grid(True, axis="y", alpha=0.3)
+
+    # Latency bar chart
+    if latencies:
+        ax2.bar(op_types, latencies, color=["blue", "green", "red", "orange"])
+        ax2.set_ylabel("Latency (ms)", fontsize=12)
+        ax2.set_title("Operation Latency (Mean)", fontsize=14)
+        ax2.grid(True, axis="y", alpha=0.3)
+
+    plt.tight_layout()
+
+    graph_file = os.path.join(output_dir, f"{filename_prefix}_operations.png")
+    plt.savefig(graph_file, dpi=100, bbox_inches="tight")
+    plt.close()
+
+    return graph_file
+
+
+def generate_html_report(json_files, output_dir):
+    """Generate HTML report with all results"""
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
+
+    html_content = f"""<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+    <title>MinIO Warp Benchmark Results</title>
+    <style>
+        body {{
+            font-family: Arial, sans-serif;
+            margin: 20px;
+            background-color: #f5f5f5;
+        }}
+        h1 {{
+            color: #333;
+            border-bottom: 3px solid #4CAF50;
+            padding-bottom: 10px;
+        }}
+        h2 {{
+            color: #666;
+            margin-top: 30px;
+        }}
+        .result-section {{
+            background-color: white;
+            padding: 20px;
+            margin-bottom: 30px;
+            border-radius: 8px;
+            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
+        }}
+        .stats-grid {{
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+            gap: 20px;
+            margin: 20px 0;
+        }}
+        .stat-card {{
+            background-color: #f9f9f9;
+            padding: 15px;
+            border-radius: 5px;
+            border-left: 4px solid #4CAF50;
+        }}
+        .stat-label {{
+            color: #666;
+            font-size: 0.9em;
+        }}
+        .stat-value {{
+            font-size: 1.5em;
+            font-weight: bold;
+            color: #333;
+            margin-top: 5px;
+        }}
+        img {{
+            max-width: 100%;
+            height: auto;
+            margin: 20px 0;
+            border: 1px solid #ddd;
+            border-radius: 5px;
+        }}
+        table {{
+            width: 100%;
+            border-collapse: collapse;
+            margin: 20px 0;
+        }}
+        th, td {{
+            padding: 10px;
+            text-align: left;
+            border-bottom: 1px solid #ddd;
+        }}
+        th {{
+            background-color: #4CAF50;
+            color: white;
+        }}
+        tr:hover {{
+            background-color: #f5f5f5;
+        }}
+        .timestamp {{
+            color: #666;
+            font-style: italic;
+        }}
+    </style>
+</head>
+<body>
+    <h1>MinIO Warp Benchmark Results</h1>
+    <p class="timestamp">Generated: {timestamp}</p>
+"""
+
+    for json_file in sorted(json_files, reverse=True):
+        try:
+            data = parse_warp_json(json_file)
+            filename = os.path.basename(json_file)
+            filename_prefix = filename.replace(".json", "")
+
+            # Extract key metrics
+            total = data["total"]
+            total_requests = total.get("total_requests", 0)
+            total_objects = total.get("total_objects", 0)
+            total_errors = total.get("total_errors", 0)
+            total_bytes = total.get("total_bytes", 0)
+            concurrency = total.get("concurrency", 0)
+
+            throughput_data = total.get("throughput", {}).get("segmented", {})
+            fastest_bps = throughput_data.get("fastest_bps", 0) / (1024 * 1024)  # MB/s
+            slowest_bps = throughput_data.get("slowest_bps", 0) / (1024 * 1024)  # MB/s
+            average_bps = throughput_data.get("average_bps", 0) / (1024 * 1024)  # MB/s
+
+            html_content += f"""
+    <div class="result-section">
+        <h2>{filename}</h2>
+
+        <div class="stats-grid">
+            <div class="stat-card">
+                <div class="stat-label">Total Requests</div>
+                <div class="stat-value">{total_requests:,}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Objects</div>
+                <div class="stat-value">{total_objects:,}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Errors</div>
+                <div class="stat-value">{total_errors}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Total Data</div>
+                <div class="stat-value">{total_bytes / (1024**3):.2f} GB</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Concurrency</div>
+                <div class="stat-value">{concurrency}</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Average Throughput</div>
+                <div class="stat-value">{average_bps:.1f} MB/s</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Fastest Throughput</div>
+                <div class="stat-value">{fastest_bps:.1f} MB/s</div>
+            </div>
+            <div class="stat-card">
+                <div class="stat-label">Slowest Throughput</div>
+                <div class="stat-value">{slowest_bps:.1f} MB/s</div>
+            </div>
+        </div>
+"""
+
+            # Generate graphs
+            throughput_graph = generate_throughput_graph(
+                data, output_dir, filename_prefix
+            )
+            if throughput_graph:
+                rel_path = os.path.basename(throughput_graph)
+                html_content += (
+                    f'        <img src="{rel_path}" alt="Throughput Graph">\n'
+                )
+
+            ops_graph = generate_operation_stats_graph(
+                data, output_dir, filename_prefix
+            )
+            if ops_graph:
+                rel_path = os.path.basename(ops_graph)
+                html_content += (
+                    f'        <img src="{rel_path}" alt="Operations Statistics">\n'
+                )
+
+            # Add operations table if available
+            operations = data.get("operations", {})
+            if operations:
+                html_content += """
+        <h3>Operation Details</h3>
+        <table>
+            <tr>
+                <th>Operation</th>
+                <th>Throughput (ops/s)</th>
+                <th>Mean Latency (ms)</th>
+                <th>Min Latency (ms)</th>
+                <th>Max Latency (ms)</th>
+            </tr>
+"""
+                for op_type, op_data in operations.items():
+                    if op_type in ["DELETE", "GET", "PUT", "STAT"]:
+                        throughput = op_data.get("throughput", {}).get("obj_per_sec", 0)
+                        latency = op_data.get("latency", {})
+                        mean_lat = latency.get("mean", 0) / 1_000_000
+                        min_lat = latency.get("min", 0) / 1_000_000
+                        max_lat = latency.get("max", 0) / 1_000_000
+
+                        html_content += f"""
+            <tr>
+                <td>{op_type}</td>
+                <td>{throughput:.2f}</td>
+                <td>{mean_lat:.2f}</td>
+                <td>{min_lat:.2f}</td>
+                <td>{max_lat:.2f}</td>
+            </tr>
+"""
+                html_content += "        </table>\n"
+
+            html_content += "    </div>\n"
+
+        except Exception as e:
+            print(f"Error processing {json_file}: {e}")
+            continue
+
+    html_content += """
+</body>
+</html>
+"""
+
+    html_file = os.path.join(output_dir, "warp_benchmark_report.html")
+    with open(html_file, "w") as f:
+        f.write(html_content)
+
+    return html_file
+
+
+def main():
+    if len(sys.argv) > 1:
+        results_dir = sys.argv[1]
+    else:
+        # Default to workflows/minio/results
+        script_dir = Path(__file__).parent.absolute()
+        results_dir = script_dir.parent / "results"
+
+    results_dir = Path(results_dir)
+    if not results_dir.exists():
+        print(f"Results directory {results_dir} does not exist")
+        sys.exit(1)
+
+    # Find all JSON files
+    json_files = list(results_dir.glob("warp_benchmark_*.json"))
+
+    if not json_files:
+        print(f"No warp_benchmark_*.json files found in {results_dir}")
+        sys.exit(1)
+
+    print(f"Found {len(json_files)} result files")
+
+    # Generate HTML report with graphs
+    html_file = generate_html_report(json_files, results_dir)
+    print(f"Generated HTML report: {html_file}")
+
+    # Also generate individual graphs for latest result
+    latest_json = max(json_files, key=os.path.getctime)
+    data = parse_warp_json(latest_json)
+    filename_prefix = latest_json.stem
+
+    throughput_graph = generate_throughput_graph(data, results_dir, filename_prefix)
+    if throughput_graph:
+        print(f"Generated throughput graph: {throughput_graph}")
+
+    ops_graph = generate_operation_stats_graph(data, results_dir, filename_prefix)
+    if ops_graph:
+        print(f"Generated operations graph: {ops_graph}")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/workflows/minio/scripts/run_benchmark_suite.sh b/workflows/minio/scripts/run_benchmark_suite.sh
new file mode 100755
index 00000000..6d71f48b
--- /dev/null
+++ b/workflows/minio/scripts/run_benchmark_suite.sh
@@ -0,0 +1,116 @@
+#!/bin/bash
+# Run comprehensive MinIO Warp benchmark suite
+
+MINIO_HOST="${1:-localhost:9000}"
+ACCESS_KEY="${2:-minioadmin}"
+SECRET_KEY="${3:-minioadmin}"
+TOTAL_DURATION="${4:-30m}"
+RESULTS_DIR="/tmp/warp-results"
+TIMESTAMP=$(date +%s)
+
+# Parse duration to seconds for calculation
+parse_duration_to_seconds() {
+    local duration="$1"
+    local value="${duration//[^0-9]/}"
+    local unit="${duration//[0-9]/}"
+
+    case "$unit" in
+        s) echo "$value" ;;
+        m) echo $((value * 60)) ;;
+        h) echo $((value * 3600)) ;;
+        *) echo "1800" ;;  # Default 30 minutes
+    esac
+}
+
+TOTAL_SECONDS=$(parse_duration_to_seconds "$TOTAL_DURATION")
+# Distribute time across 8 benchmark types (reserving some buffer)
+PER_TEST_SECONDS=$((TOTAL_SECONDS / 10))  # Divide by 10 to leave buffer
+if [ $PER_TEST_SECONDS -lt 30 ]; then
+    PER_TEST_SECONDS=30  # Minimum 30 seconds per test
+fi
+
+# Convert back to duration string
+if [ $PER_TEST_SECONDS -ge 3600 ]; then
+    PER_TEST_DURATION="$((PER_TEST_SECONDS / 3600))h"
+elif [ $PER_TEST_SECONDS -ge 60 ]; then
+    PER_TEST_DURATION="$((PER_TEST_SECONDS / 60))m"
+else
+    PER_TEST_DURATION="${PER_TEST_SECONDS}s"
+fi
+
+echo "🚀 MinIO Warp Comprehensive Benchmark Suite"
+echo "==========================================="
+echo "Target: $MINIO_HOST"
+echo "Total Duration: $TOTAL_DURATION ($TOTAL_SECONDS seconds)"
+echo "Per Test Duration: $PER_TEST_DURATION"
+echo "Results: $RESULTS_DIR"
+echo ""
+
+# Ensure results directory exists
+mkdir -p "$RESULTS_DIR"
+
+# Function to run a benchmark
+run_benchmark() {
+    local test_type=$1
+    local duration=$2
+    local concurrent=$3
+    local obj_size=$4
+
+    echo "Running $test_type benchmark..."
+    echo "  Duration: $duration, Concurrent: $concurrent, Size: $obj_size"
+
+    # Include concurrency and object size in the name so repeated test
+    # types (the extra "mixed" runs below) don't overwrite each other.
+    OUTPUT_FILE="${RESULTS_DIR}/warp_${test_type}_${concurrent}c_${obj_size}_${TIMESTAMP}.json"
+
+    # Don't use --autoterm or --objects for duration-based tests.
+    # Send stderr (progress output) to a sibling .log file so the
+    # JSON results file stays parseable by the report scripts.
+    warp "$test_type" \
+        --host="$MINIO_HOST" \
+        --access-key="$ACCESS_KEY" \
+        --secret-key="$SECRET_KEY" \
+        --bucket="warp-bench-${test_type}" \
+        --duration="$duration" \
+        --concurrent="$concurrent" \
+        --obj.size="$obj_size" \
+        --noclear \
+        --json > "$OUTPUT_FILE" 2>"${OUTPUT_FILE%.json}.log"
+
+    if [ $? -eq 0 ]; then
+        echo "✅ $test_type completed successfully"
+    else
+        echo "❌ $test_type failed"
+    fi
+    echo ""
+}
+
+# Run comprehensive test suite
+echo "📊 Starting Comprehensive Benchmark Suite"
+echo "-----------------------------------------"
+
+# 1. Mixed workload (simulates real-world usage)
+run_benchmark "mixed" "$PER_TEST_DURATION" "10" "1MB"
+
+# 2. GET performance (read-heavy workload)
+run_benchmark "get" "$PER_TEST_DURATION" "20" "1MB"
+
+# 3. PUT performance (write-heavy workload)
+run_benchmark "put" "$PER_TEST_DURATION" "10" "10MB"
+
+# 4. DELETE performance
+run_benchmark "delete" "$PER_TEST_DURATION" "5" "1MB"
+
+# 5. LIST operations (metadata operations)
+run_benchmark "list" "$PER_TEST_DURATION" "5" "1KB"
+
+# 6. Small object performance
+run_benchmark "mixed" "$PER_TEST_DURATION" "10" "1KB"
+
+# 7. Large object performance
+run_benchmark "mixed" "$PER_TEST_DURATION" "5" "100MB"
+
+# 8. High concurrency test
+run_benchmark "mixed" "$PER_TEST_DURATION" "50" "1MB"
+
+echo "==========================================="
+echo "✅ Benchmark Suite Complete!"
+echo ""
+echo "Results saved in: $RESULTS_DIR"
+echo "Run 'make minio-results' to generate analysis"
-- 
2.50.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure
  2025-09-02 23:53 ` [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure Luis Chamberlain
@ 2025-09-03  9:02   ` Daniel Gomez
  0 siblings, 0 replies; 5+ messages in thread
From: Daniel Gomez @ 2025-09-03  9:02 UTC (permalink / raw)
  To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops
  Cc: hui81.qi, kundan.kumar

On 03/09/2025 01.53, Luis Chamberlain wrote:
> This adds support for using pre-existing infrastructure (bare metal servers,
> pre-provisioned VMs, cloud instances) that users already have SSH access to,
> bypassing the kdevops bringup process.
> 
> We borrow the DECLARE_* foo practice from the Linux kernel to ensure the
> user will declare the hosts they already have set up with:
> 
>   make DECLARE_HOSTS="foo bar" defconfig-foo
>   or
>   make DECLARE_HOSTS="foo bar" menuconfig
> 
> We just skip the data partition setup at the role level. The user
> is encouraged to set DATA_PATH if they want something other than
> /data/ to be used. The onus is on them to ensure that DATA_PATH
> works for the user the host is already configured for SSH access
> with.
> 
> Currently no workflows are fully supported with declared hosts.
> Each workflow requires individual review and testing to ensure proper
> operation with pre-existing infrastructure before being enabled.
> 
> Generated-by: Claude AI
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---

...

> diff --git a/kconfigs/Kconfig.declared_hosts b/kconfigs/Kconfig.declared_hosts
> new file mode 100644
> index 00000000..cfdae8fa
> --- /dev/null
> +++ b/kconfigs/Kconfig.declared_hosts
> @@ -0,0 +1,71 @@
> +# Configuration for declared hosts that skip bringup process
> +
> +config KDEVOPS_USE_DECLARED_HOSTS
> +	bool "Use declared hosts (skip bringup process)"
> +	select WORKFLOW_INFER_USER_AND_GROUP
> +	output yaml
> +	help
> +	  Enable this option to use pre-existing hosts that you have already
> +	  configured with SSH access. This is useful for:
> +
> +	  * Bare metal systems
> +	  * Pre-provisioned VMs or cloud instances
> +	  * Systems managed by other infrastructure tools
> +
> +	  When this option is enabled:
> +	  - SSH keys will not be generated (assumes you already have access)
> +	  - Bringup and teardown operations will be skipped
> +	  - User and group settings will be inferred from the target hosts
> +	  - You must provide the list of hosts in KDEVOPS_DECLARED_HOSTS
> +
> +	  This option automatically:
> +	  - Selects WORKFLOW_INFER_USER_AND_GROUP to detect the correct
> +	    user and group on the target systems
> +	  - Assumes SSH access is already configured
> +
> +if KDEVOPS_USE_DECLARED_HOSTS
> +
> +config KDEVOPS_DECLARED_HOSTS
> +	string "List of declared hosts"
> +	output yaml
> +	default "$(shell, echo ${DECLARED_HOSTS})"
> +	default ""
> +	help
> +	  Provide a list of hostnames or IP addresses for the pre-existing
> +	  systems you want to use. These hosts must already be accessible
> +	  via SSH with the appropriate keys configured.
> +
> +	  Format: Space or comma-separated list
> +	  Example: "host1 host2 host3" or "host1,host2,host3"
> +
> +	  These hosts will be used directly without any bringup process.
> +	  Make sure you have:
> +	  - SSH access configured
> +	  - Required packages installed
> +	  - Appropriate user permissions
> +
> +config KDEVOPS_DECLARED_HOSTS_PYTHON_INTERPRETER
> +	string "Python interpreter path on declared hosts"
> +	default "/usr/bin/python3"
> +	output yaml
> +	help
> +	  Specify the path to the Python interpreter on the declared hosts.
> +	  This is required for Ansible to function properly.
> +
> +	  Common values:
> +	  - /usr/bin/python3 (most modern systems)
> +	  - /usr/bin/python (older systems)
> +	  - /usr/local/bin/python3 (custom installations)

Isn't this redundant?

In the hosts template files, this patch does this:

ansible_python_interpreter = "{{ kdevops_declared_hosts_python_interpreter | default(kdevops_python_interpreter) }}"

I think using the kdevops_python_interpreter config/yaml directly should be
sufficient. Or am I missing something?

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2025-09-03  9:03 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-02 23:53 [PATCH v3 0/3] declared hosts support Luis Chamberlain
2025-09-02 23:53 ` [PATCH v3 1/3] gen_hosts: use kdevops_workflow_name directly for template selection Luis Chamberlain
2025-09-02 23:53 ` [PATCH v3 2/3] declared_hosts: add support for pre-existing infrastructure Luis Chamberlain
2025-09-03  9:02   ` Daniel Gomez
2025-09-02 23:53 ` [PATCH v3 3/3] minio: add MinIO Warp S3 benchmarking with declared hosts support Luis Chamberlain

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).