* [PATCH] build-linux: add workflow for repeated kernel builds
@ 2025-09-19 3:51 Luis Chamberlain
2025-09-19 8:29 ` Daniel Gomez
0 siblings, 1 reply; 3+ messages in thread
From: Luis Chamberlain @ 2025-09-19 3:51 UTC (permalink / raw)
To: Chuck Lever, Daniel Gomez, kdevops; +Cc: David Bueso, Luis Chamberlain
Add a new workflow that builds the Linux kernel repeatedly so we can
measure build-time variation and performance. This is useful for
benchmarking build systems and for compiler performance testing.

This goes in with monitoring support so we can do A/B testing across
different filesystems.
Generated-by: Claude AI
Suggested-by: David Bueso <dave@stgolabs.net>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
Demo of the results visualization:
https://htmlpreview.github.io/?https://github.com/mcgrof/plot-build/blob/main/index.html
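For reviewers, the per-host summary JSON that the role slurps and prints at the
end of playbooks/roles/build_linux/tasks/main.yml can be sketched in a few
lines of Python. This is a minimal sketch only: the field names mirror the
"Display summary statistics" task below, but the `summarize` helper itself is
illustrative and not part of this patch.

```python
import json
import statistics


def summarize(durations_s, failed=0):
    """Summarize per-build wall-clock times (in seconds) into the same
    fields the role displays: build totals plus average/median/min/max
    and the cumulative time in hours."""
    stats = {
        "average": statistics.mean(durations_s),
        "median": statistics.median(durations_s),
        "min": min(durations_s),
        "max": max(durations_s),
        "total_hours": sum(durations_s) / 3600.0,
    }
    return {
        "total_builds": len(durations_s) + failed,
        "successful_builds": len(durations_s),
        "failed_builds": failed,
        "statistics": stats,
    }


# Example: three successful builds of ~5 minutes each.
print(json.dumps(summarize([310.2, 305.8, 312.4]), indent=2))
```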
.gitignore | 1 +
defconfigs/build-linux | 12 +
defconfigs/build-linux-multifs | 39 +
defconfigs/build-linux-xfs-16k | 15 +
defconfigs/build-linux-xfs-32k | 15 +
defconfigs/build-linux-xfs-4k | 15 +
defconfigs/build-linux-xfs-64k | 15 +
defconfigs/build-linux-xfs-8k | 15 +
kconfigs/workflows/Kconfig | 20 +
playbooks/build_linux.yml | 30 +
playbooks/build_linux_results.yml | 69 ++
playbooks/roles/build_linux/defaults/main.yml | 25 +
.../tasks/install-deps/debian/main.yml | 23 +
.../build_linux/tasks/install-deps/main.yml | 12 +
.../tasks/install-deps/redhat/main.yml | 23 +
.../tasks/install-deps/suse/main.yml | 23 +
playbooks/roles/build_linux/tasks/main.yml | 244 ++++
playbooks/roles/build_qemu/defaults/main.yml | 2 +
playbooks/roles/gen_hosts/tasks/main.yml | 34 +
.../templates/workflows/build-linux.j2 | 178 +++
playbooks/roles/gen_nodes/defaults/main.yml | 1 +
playbooks/roles/gen_nodes/tasks/main.yml | 131 +++
workflows/Makefile | 4 +
workflows/build-linux/Kconfig | 255 +++++
workflows/build-linux/Kconfig.multifs | 199 ++++
workflows/build-linux/Makefile | 101 ++
workflows/build-linux/scripts/build_linux.py | 314 +++++
.../build-linux/scripts/combine_results.py | 98 ++
.../build-linux/scripts/generate_summaries.py | 113 ++
.../build-linux/scripts/visualize_results.py | 1015 +++++++++++++++++
workflows/linux/Kconfig | 1 +
31 files changed, 3042 insertions(+)
create mode 100644 defconfigs/build-linux
create mode 100644 defconfigs/build-linux-multifs
create mode 100644 defconfigs/build-linux-xfs-16k
create mode 100644 defconfigs/build-linux-xfs-32k
create mode 100644 defconfigs/build-linux-xfs-4k
create mode 100644 defconfigs/build-linux-xfs-64k
create mode 100644 defconfigs/build-linux-xfs-8k
create mode 100644 playbooks/build_linux.yml
create mode 100644 playbooks/build_linux_results.yml
create mode 100644 playbooks/roles/build_linux/defaults/main.yml
create mode 100644 playbooks/roles/build_linux/tasks/install-deps/debian/main.yml
create mode 100644 playbooks/roles/build_linux/tasks/install-deps/main.yml
create mode 100644 playbooks/roles/build_linux/tasks/install-deps/redhat/main.yml
create mode 100644 playbooks/roles/build_linux/tasks/install-deps/suse/main.yml
create mode 100644 playbooks/roles/build_linux/tasks/main.yml
create mode 100644 playbooks/roles/gen_hosts/templates/workflows/build-linux.j2
create mode 100644 workflows/build-linux/Kconfig
create mode 100644 workflows/build-linux/Kconfig.multifs
create mode 100644 workflows/build-linux/Makefile
create mode 100644 workflows/build-linux/scripts/build_linux.py
create mode 100644 workflows/build-linux/scripts/combine_results.py
create mode 100755 workflows/build-linux/scripts/generate_summaries.py
create mode 100755 workflows/build-linux/scripts/visualize_results.py
diff --git a/.gitignore b/.gitignore
index 720c94b6..faab0e49 100644
--- a/.gitignore
+++ b/.gitignore
@@ -73,6 +73,7 @@ workflows/mmtests/results/
tmp
workflows/fio-tests/results/
+workflows/build-linux/results/
playbooks/roles/linux-mirror/linux-mirror-systemd/*.service
playbooks/roles/linux-mirror/linux-mirror-systemd/*.timer
diff --git a/defconfigs/build-linux b/defconfigs/build-linux
new file mode 100644
index 00000000..3e6b7887
--- /dev/null
+++ b/defconfigs/build-linux
@@ -0,0 +1,12 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=100
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=n
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/defconfigs/build-linux-multifs b/defconfigs/build-linux-multifs
new file mode 100644
index 00000000..9386aca4
--- /dev/null
+++ b/defconfigs/build-linux-multifs
@@ -0,0 +1,39 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=5
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
+
+# Multi-filesystem testing
+CONFIG_BUILD_LINUX_ENABLE_MULTIFS_TESTING=y
+
+# Test multiple XFS configurations
+CONFIG_BUILD_LINUX_MULTIFS_TEST_XFS=y
+CONFIG_BUILD_LINUX_MULTIFS_XFS_4K_4KS=y
+CONFIG_BUILD_LINUX_MULTIFS_XFS_16K_4KS=y
+CONFIG_BUILD_LINUX_MULTIFS_XFS_32K_4KS=y
+CONFIG_BUILD_LINUX_MULTIFS_XFS_64K_4KS=y
+
+
+# Test ext4
+CONFIG_BUILD_LINUX_MULTIFS_TEST_EXT4=y
+CONFIG_BUILD_LINUX_MULTIFS_EXT4_4K=y
+
+# Test btrfs
+CONFIG_BUILD_LINUX_MULTIFS_TEST_BTRFS=y
+CONFIG_BUILD_LINUX_MULTIFS_BTRFS_DEFAULT=y
+
+# Auto-detect filesystem from node name
+CONFIG_BUILD_LINUX_MULTIFS_USE_NODE_FS=y
+
+CONFIG_ENABLE_MONITORING=y
+CONFIG_MONITOR_DEVELOPMENTAL_STATS=y
+CONFIG_MONITOR_FOLIO_MIGRATION=y
+CONFIG_MONITOR_MEMORY_FRAGMENTATION=y
diff --git a/defconfigs/build-linux-xfs-16k b/defconfigs/build-linux-xfs-16k
new file mode 100644
index 00000000..1f9168e8
--- /dev/null
+++ b/defconfigs/build-linux-xfs-16k
@@ -0,0 +1,15 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=10
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_FSTYPE_XFS=y
+CONFIG_BUILD_LINUX_XFS_BLOCKSIZE_16K=y
+CONFIG_BUILD_LINUX_XFS_SECTORSIZE_4K=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/defconfigs/build-linux-xfs-32k b/defconfigs/build-linux-xfs-32k
new file mode 100644
index 00000000..0e75296b
--- /dev/null
+++ b/defconfigs/build-linux-xfs-32k
@@ -0,0 +1,15 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=10
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_FSTYPE_XFS=y
+CONFIG_BUILD_LINUX_XFS_BLOCKSIZE_32K=y
+CONFIG_BUILD_LINUX_XFS_SECTORSIZE_4K=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/defconfigs/build-linux-xfs-4k b/defconfigs/build-linux-xfs-4k
new file mode 100644
index 00000000..73915bb7
--- /dev/null
+++ b/defconfigs/build-linux-xfs-4k
@@ -0,0 +1,15 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=10
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_FSTYPE_XFS=y
+CONFIG_BUILD_LINUX_XFS_BLOCKSIZE_4K=y
+CONFIG_BUILD_LINUX_XFS_SECTORSIZE_4K=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/defconfigs/build-linux-xfs-64k b/defconfigs/build-linux-xfs-64k
new file mode 100644
index 00000000..66eba603
--- /dev/null
+++ b/defconfigs/build-linux-xfs-64k
@@ -0,0 +1,15 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=10
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_FSTYPE_XFS=y
+CONFIG_BUILD_LINUX_XFS_BLOCKSIZE_64K=y
+CONFIG_BUILD_LINUX_XFS_SECTORSIZE_4K=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/defconfigs/build-linux-xfs-8k b/defconfigs/build-linux-xfs-8k
new file mode 100644
index 00000000..18a7a065
--- /dev/null
+++ b/defconfigs/build-linux-xfs-8k
@@ -0,0 +1,15 @@
+# Workflow configuration
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX=y
+CONFIG_BUILD_LINUX_REPEAT_COUNT=10
+CONFIG_BUILD_LINUX_CLEAN_BETWEEN=y
+CONFIG_BUILD_LINUX_COLLECT_STATS=y
+CONFIG_BUILD_LINUX_STORAGE_ENABLE=y
+CONFIG_BUILD_LINUX_FSTYPE_XFS=y
+CONFIG_BUILD_LINUX_XFS_BLOCKSIZE_8K=y
+CONFIG_BUILD_LINUX_XFS_SECTORSIZE_4K=y
+CONFIG_BUILD_LINUX_USE_LATEST_TAG=y
diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index 1bd1dd56..1be04c9c 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -240,6 +240,14 @@ config KDEVOPS_WORKFLOW_DEDICATE_MINIO
This will dedicate your configuration to running only the
MinIO workflow for S3 storage benchmarking with Warp testing.
+config KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX
+ bool "build-linux"
+ depends on !KDEVOPS_USE_DECLARED_HOSTS
+ select KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
+ help
+ This will dedicate your configuration to running only the
+ build-linux workflow for repeated Linux kernel builds.
+
endchoice
config KDEVOPS_WORKFLOW_NAME
@@ -258,6 +266,7 @@ config KDEVOPS_WORKFLOW_NAME
default "fio-tests" if KDEVOPS_WORKFLOW_DEDICATE_FIO_TESTS
default "ai" if KDEVOPS_WORKFLOW_DEDICATE_AI
default "minio" if KDEVOPS_WORKFLOW_DEDICATE_MINIO
+ default "build-linux" if KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX
endif
@@ -532,6 +541,17 @@ source "workflows/minio/Kconfig"
endmenu
endif # KDEVOPS_WORKFLOW_ENABLE_MINIO
+config KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
+ bool
+ output yaml
+ default y if KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_BUILD_LINUX || KDEVOPS_WORKFLOW_DEDICATE_BUILD_LINUX
+
+if KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
+menu "Configure Linux kernel build workflow"
+source "workflows/build-linux/Kconfig"
+endmenu
+endif # KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
+
config KDEVOPS_WORKFLOW_ENABLE_SSD_STEADY_STATE
bool "Attain SSD steady state prior to tests"
output yaml
diff --git a/playbooks/build_linux.yml b/playbooks/build_linux.yml
new file mode 100644
index 00000000..c703669b
--- /dev/null
+++ b/playbooks/build_linux.yml
@@ -0,0 +1,30 @@
+---
+- name: Build Linux Kernel Multiple Times
+ hosts: baseline:dev
+
+ pre_tasks:
+ # Install bootlinux dependencies (for kernel building capabilities)
+ - ansible.builtin.import_tasks: roles/bootlinux/tasks/install-deps/main.yml
+ tags: ["install_deps"]
+
+ # Install monitoring dependencies if monitoring is enabled
+ - ansible.builtin.import_tasks: roles/monitoring/tasks/install-deps/main.yml
+ when:
+ - enable_monitoring|default(false)|bool
+ tags: ["monitoring", "install_deps"]
+
+ # Start monitoring services right before running builds
+ - ansible.builtin.import_tasks: roles/monitoring/tasks/monitor_run.yml
+ when:
+ - enable_monitoring|default(false)|bool
+ tags: ["monitoring", "monitor_run"]
+
+ roles:
+ - build_linux
+
+ post_tasks:
+ # Collect monitoring data after builds complete
+ - ansible.builtin.import_tasks: roles/monitoring/tasks/monitor_collect.yml
+ when:
+ - enable_monitoring|default(false)|bool
+ tags: ["monitoring", "monitor_collect"]
diff --git a/playbooks/build_linux_results.yml b/playbooks/build_linux_results.yml
new file mode 100644
index 00000000..c21cca96
--- /dev/null
+++ b/playbooks/build_linux_results.yml
@@ -0,0 +1,69 @@
+---
+- name: Collect Build Linux Results
+ hosts: all
+ vars:
+ build_linux_results_dir: "workflows/build-linux/results"
+
+ tasks:
+ - name: Create local results directory
+ ansible.builtin.file:
+ path: "{{ topdir_path }}/{{ build_linux_results_dir }}"
+ state: directory
+ mode: '0755'
+ delegate_to: localhost
+ run_once: true
+
+ - name: Check for build results on target
+ ansible.builtin.stat:
+ path: "{{ data_path }}/build-results"
+ register: results_dir
+
+ - name: List build result files
+ ansible.builtin.find:
+ paths: "{{ data_path }}/build-results"
+ patterns: "*"
+ file_type: file
+ register: result_files
+ when: results_dir.stat.exists
+
+ - name: Fetch build results
+ ansible.builtin.fetch:
+ src: "{{ item.path }}"
+ dest: "{{ topdir_path }}/{{ build_linux_results_dir }}/{{ ansible_hostname }}_{{ item.path | basename }}"
+ flat: true
+ loop: "{{ result_files.files }}"
+ when:
+ - results_dir.stat.exists
+ - result_files.files is defined
+
+ - name: Copy combine results script
+ ansible.builtin.copy:
+ src: "{{ playbook_dir }}/../workflows/build-linux/scripts/combine_results.py"
+ dest: "{{ topdir_path }}/combine_results.py"
+ mode: '0755'
+ delegate_to: localhost
+ run_once: true
+ when: result_files.matched | default(0) > 0
+
+ - name: Generate combined report
+ ansible.builtin.command: |
+ python3 {{ topdir_path }}/combine_results.py {{ topdir_path }}/{{ build_linux_results_dir }}
+ delegate_to: localhost
+ run_once: true
+ register: combined_report
+ when: result_files.matched | default(0) > 0
+
+ - name: Clean up temporary script
+ ansible.builtin.file:
+ path: "{{ topdir_path }}/combine_results.py"
+ state: absent
+ delegate_to: localhost
+ run_once: true
+ when: result_files.matched | default(0) > 0
+
+ - name: Display combined report
+ ansible.builtin.debug:
+ msg: "{{ combined_report.stdout_lines }}"
+ when:
+ - combined_report is defined
+ - combined_report.stdout_lines is defined
diff --git a/playbooks/roles/build_linux/defaults/main.yml b/playbooks/roles/build_linux/defaults/main.yml
new file mode 100644
index 00000000..0dcfbf94
--- /dev/null
+++ b/playbooks/roles/build_linux/defaults/main.yml
@@ -0,0 +1,25 @@
+---
+# Defaults for build_linux role
+
+build_linux_repeat_count: 100
+build_linux_make_jobs: 0
+build_linux_target: "all"
+build_linux_clean_between: false
+build_linux_collect_stats: true
+build_linux_results_dir: "workflows/build-linux/results"
+build_linux_storage_enable: false
+build_linux_device: ""
+build_linux_use_latest_tag: false
+build_linux_allow_modifications: false
+shallow_clone: true
+clone_depth: 1
+
+# Build paths - for the build-linux workflow, both the source and build trees live on the build filesystem
+linux_source_dir: "{{ data_path }}/build/linux-source"
+linux_build_dir: "{{ data_path }}/build/linux-build"
+
+# We reuse the bootlinux tree URL so we can leverage its existing URL
+# parsing instead of re-inventing that kconfig. If you do not want to
+# build and install Linux on the target nodes, simply skip the make
+# linux target.
+linux_git_url: "{{ bootlinux_tree | default('git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git') }}"
diff --git a/playbooks/roles/build_linux/tasks/install-deps/debian/main.yml b/playbooks/roles/build_linux/tasks/install-deps/debian/main.yml
new file mode 100644
index 00000000..1759e348
--- /dev/null
+++ b/playbooks/roles/build_linux/tasks/install-deps/debian/main.yml
@@ -0,0 +1,23 @@
+---
+# Dependencies for build-linux workflow timing and statistics
+- name: Install build-linux timing dependencies
+ become: true
+ become_method: sudo
+ ansible.builtin.apt:
+ name:
+ - time # Required for timing statistics
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+
+# Dependencies for build-linux workflow visualization
+- name: Install build-linux visualization dependencies
+ become: true
+ become_method: sudo
+ ansible.builtin.apt:
+ name:
+ - python3-matplotlib
+ - python3-numpy
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
diff --git a/playbooks/roles/build_linux/tasks/install-deps/main.yml b/playbooks/roles/build_linux/tasks/install-deps/main.yml
new file mode 100644
index 00000000..dd33fd0d
--- /dev/null
+++ b/playbooks/roles/build_linux/tasks/install-deps/main.yml
@@ -0,0 +1,12 @@
+---
+- name: Install dependencies to build and test Linux for Debian and Ubuntu systems
+ ansible.builtin.import_tasks: debian/main.yml
+ when: ansible_facts['os_family']|lower == 'debian'
+
+- name: Install dependencies to build and test Linux for Red Hat and Fedora systems
+ ansible.builtin.import_tasks: redhat/main.yml
+ when: ansible_facts['os_family']|lower == 'redhat'
+
+- name: Install dependencies to build and test Linux for SUSE systems
+ ansible.builtin.import_tasks: suse/main.yml
+ when: ansible_facts['os_family']|lower == 'suse'
diff --git a/playbooks/roles/build_linux/tasks/install-deps/redhat/main.yml b/playbooks/roles/build_linux/tasks/install-deps/redhat/main.yml
new file mode 100644
index 00000000..57c08498
--- /dev/null
+++ b/playbooks/roles/build_linux/tasks/install-deps/redhat/main.yml
@@ -0,0 +1,23 @@
+---
+# Dependencies for build-linux workflow timing and statistics
+- name: Install build-linux timing dependencies
+ become: true
+ become_method: sudo
+ ansible.builtin.dnf:
+ name:
+ - time # Required for timing statistics
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+
+# Dependencies for build-linux workflow visualization
+- name: Install build-linux visualization dependencies
+ become: true
+ become_method: sudo
+ ansible.builtin.dnf:
+ name:
+ - python3-matplotlib
+ - python3-numpy
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
diff --git a/playbooks/roles/build_linux/tasks/install-deps/suse/main.yml b/playbooks/roles/build_linux/tasks/install-deps/suse/main.yml
new file mode 100644
index 00000000..0be2118c
--- /dev/null
+++ b/playbooks/roles/build_linux/tasks/install-deps/suse/main.yml
@@ -0,0 +1,23 @@
+---
+# Dependencies for build-linux workflow timing and statistics
+- name: Install build-linux timing dependencies
+ become: true
+ become_method: sudo
+ community.general.zypper:
+ name:
+ - time # Required for timing statistics
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+
+# Dependencies for build-linux workflow visualization
+- name: Install build-linux visualization dependencies
+ become: true
+ become_method: sudo
+ community.general.zypper:
+ name:
+ - python3-matplotlib
+ - python3-numpy
+ state: present
+ when:
+ - kdevops_workflow_enable_build_linux|default(false)|bool
diff --git a/playbooks/roles/build_linux/tasks/main.yml b/playbooks/roles/build_linux/tasks/main.yml
new file mode 100644
index 00000000..a8edda0f
--- /dev/null
+++ b/playbooks/roles/build_linux/tasks/main.yml
@@ -0,0 +1,244 @@
+---
+# Main tasks for build_linux role
+
+- name: Install build-linux dependencies
+ ansible.builtin.import_tasks: install-deps/main.yml
+ tags: ["install_deps"]
+
+- name: Set initial filesystem type
+ set_fact:
+ node_fstype: "{{ build_linux_fstype | default('xfs') }}"
+
+- name: Detect filesystem configuration from node name
+ when:
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - build_linux_multifs_use_node_fs|default(false)|bool
+ block:
+ - name: Set filesystem type based on node name
+ set_fact:
+ node_fstype: >-
+ {%- if 'xfs' in ansible_hostname -%}xfs
+ {%- elif 'ext4' in ansible_hostname -%}ext4
+ {%- elif 'btrfs' in ansible_hostname -%}btrfs
+ {%- elif 'tmpfs' in ansible_hostname -%}tmpfs
+ {%- else -%}{{ build_linux_fstype|default('xfs') }}{%- endif -%}
+
+ - name: Set XFS block and sector sizes from node name
+ when: node_fstype == 'xfs'
+ set_fact:
+ node_xfs_blocksize: >-
+ {%- if 'xfs-4k' in ansible_hostname -%}4096
+ {%- elif 'xfs-8k' in ansible_hostname -%}8192
+ {%- elif 'xfs-16k' in ansible_hostname -%}16384
+ {%- elif 'xfs-32k' in ansible_hostname -%}32768
+ {%- elif 'xfs-64k' in ansible_hostname -%}65536
+ {%- else -%}4096{%- endif -%}
+ node_xfs_sectorsize: "4096"
+
+- name: Setup dedicated build filesystem
+ when:
+ - build_linux_storage_enable
+ block:
+ - name: Check if device is already mounted
+ become: yes
+ ansible.builtin.shell: |
+ # Check both the symlink and the actual device
+ DEVICE="{{ build_linux_device }}"
+ REAL_DEVICE=$(readlink -f "$DEVICE" 2>/dev/null || echo "$DEVICE")
+ mount | grep -E "($DEVICE|$REAL_DEVICE)"
+ register: mount_check
+ failed_when: false
+ changed_when: false
+
+ - name: Unmount device if currently mounted
+ become: yes
+ ansible.builtin.mount:
+ path: "{{ data_path }}/build"
+ state: unmounted
+ when: mount_check.rc == 0
+
+ - name: Create XFS filesystem with custom block and sector sizes
+ become: yes
+ ansible.builtin.command:
+ argv:
+ - mkfs.xfs
+ - -f
+ - -b
+ - "size={{ node_xfs_blocksize|default(4096) }}"
+ - -s
+ - "size={{ node_xfs_sectorsize|default(4096) }}"
+ - "{{ build_linux_device }}"
+ when: node_fstype == 'xfs'
+
+ - name: Create non-XFS filesystem on device
+ become: yes
+ ansible.builtin.filesystem:
+ fstype: "{{ node_fstype }}"
+ dev: "{{ build_linux_device }}"
+ force: yes
+ when:
+ - node_fstype != 'tmpfs'
+ - node_fstype != 'xfs'
+
+ - name: Mount build filesystem
+ become: yes
+ ansible.builtin.mount:
+ path: "{{ data_path }}/build"
+ src: "{{ build_linux_device if node_fstype != 'tmpfs' else 'tmpfs' }}"
+ fstype: "{{ node_fstype }}"
+ opts: "{{ 'size=32G' if node_fstype == 'tmpfs' else 'defaults' }}"
+ state: mounted
+
+ - name: Set ownership of build directory
+ become: yes
+ ansible.builtin.file:
+ path: "{{ data_path }}/build"
+ owner: "{{ ansible_user_id }}"
+ group: "{{ ansible_user_gid }}"
+ mode: '0755'
+
+- name: Create build directories
+ become: yes
+ ansible.builtin.file:
+ path: "{{ item }}"
+ state: directory
+ mode: '0755'
+ owner: "{{ ansible_user_id }}"
+ group: "{{ ansible_user_gid }}"
+ loop:
+ - "{{ linux_source_dir }}"
+ - "{{ linux_build_dir }}"
+ - "{{ data_path }}/build-results"
+
+- name: Check if Linux source exists
+ ansible.builtin.stat:
+ path: "{{ linux_source_dir }}/.git"
+ register: linux_git_exists
+
+- name: Check if Linux source is valid
+ ansible.builtin.command: git rev-parse --git-dir
+ args:
+ chdir: "{{ linux_source_dir }}"
+ register: git_repo_check
+ failed_when: false
+ changed_when: false
+ when: linux_git_exists.stat.exists
+
+- name: Report git repository status
+ ansible.builtin.debug:
+ msg: >-
+ Git repository:
+ {% if linux_git_exists.stat.exists and git_repo_check.rc == 0 %}Using existing valid repository
+ {% elif linux_git_exists.stat.exists and git_repo_check.rc != 0 %}Found corrupted repository, will remove and re-clone
+ {% else %}Repository missing, will clone
+ {% endif %}
+
+- name: Remove invalid git repository
+ ansible.builtin.file:
+ path: "{{ linux_source_dir }}"
+ state: absent
+ when:
+ - linux_git_exists.stat.exists
+ - git_repo_check.rc != 0
+
+- name: Clone Linux kernel source (shallow)
+ ansible.builtin.git:
+ repo: "{{ linux_git_url }}"
+ dest: "{{ linux_source_dir }}"
+ depth: "{{ clone_depth }}"
+ single_branch: yes
+ when:
+ - not linux_git_exists.stat.exists or git_repo_check.rc != 0
+ - shallow_clone
+
+- name: Clone Linux kernel source (full)
+ ansible.builtin.git:
+ repo: "{{ linux_git_url }}"
+ dest: "{{ linux_source_dir }}"
+ when:
+ - not linux_git_exists.stat.exists or git_repo_check.rc != 0
+ - not shallow_clone
+ retries: 3
+ delay: 10
+ register: git_result
+ until: not git_result.failed
+
+- name: Check if linux source directory is writable
+ ansible.builtin.stat:
+ path: "{{ linux_source_dir }}"
+ register: linux_source_stat
+
+- name: Fetch recent tags for latest tag detection
+ ansible.builtin.command: git fetch --depth=10 --tags
+ args:
+ chdir: "{{ linux_source_dir }}"
+ when:
+ - build_linux_use_latest_tag
+ - linux_source_stat.stat.writeable
+
+- name: Ensure /data directory is writable
+ ansible.builtin.file:
+ path: "{{ data_path }}"
+ state: directory
+ mode: '0755'
+ owner: "{{ ansible_user | default('kdevops') }}"
+ group: "{{ ansible_user | default('kdevops') }}"
+ become: yes
+ become_user: root
+
+- name: Copy build script
+ ansible.builtin.copy:
+ src: "{{ playbook_dir }}/../workflows/build-linux/scripts/build_linux.py"
+ dest: "{{ data_path }}/build_linux.py"
+ mode: '0755'
+
+- name: Run Linux kernel builds
+ ansible.builtin.command: |
+ python3 {{ data_path }}/build_linux.py \
+ --source-dir {{ linux_source_dir }} \
+ --build-dir {{ linux_build_dir }} \
+ --results-dir {{ data_path }}/build-results \
+ --count {{ build_linux_repeat_count }} \
+ --jobs {{ build_linux_make_jobs }} \
+ --target {{ build_linux_target }} \
+ {% if build_linux_clean_between %}--clean-between{% endif %} \
+ {% if build_linux_collect_stats %}--collect-stats{% endif %} \
+ {% if build_linux_use_latest_tag and linux_source_stat.stat.writeable %}--use-latest{% else %}--tag {{ build_linux_custom_tag | default('master') }}{% endif %}
+ register: build_result
+ async: 36000 # 10 hours timeout
+ poll: 60 # Check every minute
+
+- name: Display build output
+ ansible.builtin.debug:
+ msg: "{{ build_result.stdout_lines }}"
+ when: build_result is defined
+
+- name: Check for build results
+ ansible.builtin.stat:
+ path: "{{ data_path }}/build-results/summary_{{ ansible_hostname }}.json"
+ register: summary_file
+
+- name: Read and display summary
+ when: summary_file.stat.exists
+ block:
+ - name: Read summary file
+ ansible.builtin.slurp:
+ src: "{{ data_path }}/build-results/summary_{{ ansible_hostname }}.json"
+ register: summary_content
+
+ - name: Parse summary
+ ansible.builtin.set_fact:
+ build_summary: "{{ summary_content.content | b64decode | from_json }}"
+
+ - name: Display summary statistics
+ ansible.builtin.debug:
+ msg: |
+ Build Statistics Summary
+ ========================
+ Total builds: {{ build_summary.total_builds }}
+ Successful builds: {{ build_summary.successful_builds }}
+ Failed builds: {{ build_summary.failed_builds }}
+ Average build time: {{ build_summary.statistics.average | round(2) }} seconds
+ Median build time: {{ build_summary.statistics.median | round(2) }} seconds
+ Min/Max: {{ build_summary.statistics.min | round(2) }}/{{ build_summary.statistics.max | round(2) }} seconds
+ Total time: {{ build_summary.statistics.total_hours | round(2) }} hours
diff --git a/playbooks/roles/build_qemu/defaults/main.yml b/playbooks/roles/build_qemu/defaults/main.yml
index 1cb90d0f..cb3aebe8 100644
--- a/playbooks/roles/build_qemu/defaults/main.yml
+++ b/playbooks/roles/build_qemu/defaults/main.yml
@@ -12,3 +12,5 @@ qemu_git: "https://github.com/qemu/qemu.git"
qemu_version: "v7.2.0-rc4"
qemu_build_dir: "{{ qemu_data }}/build"
qemu_target: "x86_64-softmmu"
+build_linux_shallow_clone: true
+build_linux_clone_depth: 1
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index 83829bd6..c4599e4e 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -192,6 +192,25 @@
- ai_enable_multifs_testing|default(false)|bool
- guestfs_nodes is defined
+- name: Load Build-linux nodes configuration for multi-filesystem setup
+ include_vars:
+ file: "{{ topdir_path }}/{{ kdevops_nodes }}"
+ name: build_linux_nodes
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - ansible_hosts_template.stat.exists
+
+- name: Extract Build-linux node names for multi-filesystem setup
+ set_fact:
+ build_linux_enabled_section_types: "{{ build_linux_nodes.guestfs_nodes | map(attribute='name') | list }}"
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - build_linux_nodes is defined
+
- name: Generate the Ansible inventory file
tags: ["hosts"]
vars:
@@ -236,6 +255,21 @@
- ansible_hosts_template.stat.exists
- not kdevops_use_declared_hosts|default(false)|bool
+- name: Generate the Ansible hosts file for a dedicated Build-linux setup
+ tags: ['hosts']
+ ansible.builtin.template:
+ src: "{{ kdevops_hosts_template }}"
+ dest: "{{ ansible_cfg_inventory }}"
+ force: true
+ trim_blocks: True
+ lstrip_blocks: True
+ mode: '0644'
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - ansible_hosts_template.stat.exists
+ - not kdevops_use_declared_hosts|default(false)|bool
+
- name: Verify if final host file exists
ansible.builtin.stat:
path: "{{ ansible_cfg_inventory }}"
diff --git a/playbooks/roles/gen_hosts/templates/workflows/build-linux.j2 b/playbooks/roles/gen_hosts/templates/workflows/build-linux.j2
new file mode 100644
index 00000000..6ada0c0a
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/workflows/build-linux.j2
@@ -0,0 +1,178 @@
+{# Workflow template for Build-linux #}
+{% if build_linux_enable_multifs_testing|default(false)|bool %}
+{# Multi-filesystem Build-linux configuration #}
+[all]
+localhost ansible_connection=local
+{% for config in build_linux_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{% for config in build_linux_enabled_section_types|default([]) %}
+{% if '-dev' not in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{% for config in build_linux_enabled_section_types|default([]) %}
+{% if '-dev' in config %}
+{{ config }}
+{% endif %}
+{% endfor %}
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+[build-linux]
+{% for config in build_linux_enabled_section_types|default([]) %}
+{{ config }}
+{% endfor %}
+
+[build-linux:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{# Create filesystem-specific groups #}
+{% if build_linux_multifs_xfs_4k_4ks|default(false)|bool %}
+[build-linux-xfs-4k-4ks]
+{{ kdevops_host_prefix }}-build-linux-xfs-4k-4ks
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-xfs-4k-4ks-dev
+{% endif %}
+
+[build-linux-xfs-4k-4ks:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_xfs_8k_4ks|default(false)|bool %}
+[build-linux-xfs-8k-4ks]
+{{ kdevops_host_prefix }}-build-linux-xfs-8k-4ks
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-xfs-8k-4ks-dev
+{% endif %}
+
+[build-linux-xfs-8k-4ks:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_xfs_16k_4ks|default(false)|bool %}
+[build-linux-xfs-16k-4ks]
+{{ kdevops_host_prefix }}-build-linux-xfs-16k-4ks
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-xfs-16k-4ks-dev
+{% endif %}
+
+[build-linux-xfs-16k-4ks:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_xfs_32k_4ks|default(false)|bool %}
+[build-linux-xfs-32k-4ks]
+{{ kdevops_host_prefix }}-build-linux-xfs-32k-4ks
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-xfs-32k-4ks-dev
+{% endif %}
+
+[build-linux-xfs-32k-4ks:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_xfs_64k_4ks|default(false)|bool %}
+[build-linux-xfs-64k-4ks]
+{{ kdevops_host_prefix }}-build-linux-xfs-64k-4ks
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-xfs-64k-4ks-dev
+{% endif %}
+
+[build-linux-xfs-64k-4ks:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_ext4_4k|default(false)|bool %}
+[build-linux-ext4-4k]
+{{ kdevops_host_prefix }}-build-linux-ext4-4k
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-ext4-4k-dev
+{% endif %}
+
+[build-linux-ext4-4k:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_ext4_16k_bigalloc|default(false)|bool %}
+[build-linux-ext4-16k-bigalloc]
+{{ kdevops_host_prefix }}-build-linux-ext4-16k-bigalloc
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-ext4-16k-bigalloc-dev
+{% endif %}
+
+[build-linux-ext4-16k-bigalloc:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+build_linux_ext4_mkfs_opts = "-F -O bigalloc -C 16384"
+{% endif %}
+
+{% if build_linux_multifs_btrfs_default|default(false)|bool %}
+[build-linux-btrfs]
+{{ kdevops_host_prefix }}-build-linux-btrfs
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-btrfs-dev
+{% endif %}
+
+[build-linux-btrfs:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% if build_linux_multifs_tmpfs_default|default(false)|bool %}
+[build-linux-tmpfs]
+{{ kdevops_host_prefix }}-build-linux-tmpfs
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-tmpfs-dev
+{% endif %}
+
+[build-linux-tmpfs:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
+
+{% else %}
+{# Standard single-filesystem Build-linux configuration #}
+[all]
+localhost ansible_connection=local
+{{ kdevops_host_prefix }}-build-linux
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-dev
+{% endif %}
+
+[all:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+[baseline]
+{{ kdevops_host_prefix }}-build-linux
+
+[baseline:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% if kdevops_baseline_and_dev %}
+[dev]
+{{ kdevops_host_prefix }}-build-linux-dev
+
+[dev:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+
+{% endif %}
+[build-linux]
+{{ kdevops_host_prefix }}-build-linux
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-build-linux-dev
+{% endif %}
+
+[build-linux:vars]
+ansible_python_interpreter = "{{ kdevops_python_interpreter }}"
+{% endif %}
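For reference, and not part of the patch itself: when only the XFS 4k profile is enabled with no dev hosts, the multi-filesystem branch of this template renders the filesystem-specific part of the inventory roughly as follows (hostnames assume the default `kdevops` host prefix, and the interpreter path is illustrative):

```ini
[build-linux]
kdevops-build-linux-xfs-4k-4ks

[build-linux:vars]
ansible_python_interpreter = "/usr/bin/python3"

[build-linux-xfs-4k-4ks]
kdevops-build-linux-xfs-4k-4ks

[build-linux-xfs-4k-4ks:vars]
ansible_python_interpreter = "/usr/bin/python3"
```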
diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index f88ef659..c2004eb4 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -25,6 +25,7 @@ libvirt_provider: false
libvirt_extra_drive_format: "qcow2"
libvirt_vcpus_count: 8
libvirt_mem_mb: 4096
+build_linux_tmpfs_mem_gb: 32
gdb_port_conflict: false
libvirt_enable_gdb: false
libvirt_gdb_baseport: 1234
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index c3397a7d..716c8ec0 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -906,6 +906,128 @@
- kdevops_baseline_and_dev
- not minio_enable_multifs_testing|default(false)|bool
+# Build-linux workflow nodes
+
+# Multi-filesystem Build-linux configurations
+- name: Collect enabled Build-linux multi-filesystem configurations
+ vars:
+ xfs_configs: >-
+ {{
+ [] +
+ (['xfs-4k-4ks'] if build_linux_multifs_xfs_4k_4ks|default(false)|bool else []) +
+ (['xfs-8k-4ks'] if build_linux_multifs_xfs_8k_4ks|default(false)|bool else []) +
+ (['xfs-16k-4ks'] if build_linux_multifs_xfs_16k_4ks|default(false)|bool else []) +
+ (['xfs-32k-4ks'] if build_linux_multifs_xfs_32k_4ks|default(false)|bool else []) +
+ (['xfs-64k-4ks'] if build_linux_multifs_xfs_64k_4ks|default(false)|bool else [])
+ }}
+ ext4_configs: >-
+ {{
+ [] +
+ (['ext4-4k'] if build_linux_multifs_ext4_4k|default(false)|bool else []) +
+ (['ext4-16k-bigalloc'] if build_linux_multifs_ext4_16k_bigalloc|default(false)|bool else [])
+ }}
+ btrfs_configs: >-
+ {{
+ [] +
+ (['btrfs'] if build_linux_multifs_btrfs_default|default(false)|bool else [])
+ }}
+ tmpfs_configs: >-
+ {{
+ [] +
+ (['tmpfs'] if build_linux_multifs_tmpfs_default|default(false)|bool else [])
+ }}
+ set_fact:
+ build_linux_multifs_enabled_configs: "{{ (xfs_configs + ext4_configs + btrfs_configs + tmpfs_configs) | unique }}"
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - ansible_nodes_template.stat.exists
+
+- name: Create Build-linux nodes for each filesystem configuration (no dev)
+ vars:
+ filesystem_nodes: "{{ [kdevops_host_prefix + '-build-linux-'] | product(build_linux_multifs_enabled_configs | default([])) | map('join') | list }}"
+ set_fact:
+ build_linux_enabled_section_types: "{{ filesystem_nodes }}"
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - ansible_nodes_template.stat.exists
+ - not kdevops_baseline_and_dev
+ - build_linux_multifs_enabled_configs is defined
+ - build_linux_multifs_enabled_configs | length > 0
+
+- name: Create Build-linux nodes for each filesystem configuration with dev hosts
+ vars:
+ filesystem_nodes: "{{ [kdevops_host_prefix + '-build-linux-'] | product(build_linux_multifs_enabled_configs | default([])) | map('join') | list }}"
+ set_fact:
+ build_linux_enabled_section_types: "{{ filesystem_nodes | product(['', '-dev']) | map('join') | list }}"
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - ansible_nodes_template.stat.exists
+ - kdevops_baseline_and_dev
+ - build_linux_multifs_enabled_configs is defined
+ - build_linux_multifs_enabled_configs | length > 0
+
+- name: Generate the Build-linux multi-filesystem kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+ tags: ['hosts']
+ vars:
+ node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ build_linux_enabled_section_types }}"
+ all_generic_nodes: "{{ build_linux_enabled_section_types }}"
+ ansible.builtin.template:
+ src: "{{ node_template }}"
+ dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+ force: true
+ mode: '0644'
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - build_linux_enable_multifs_testing|default(false)|bool
+ - ansible_nodes_template.stat.exists
+ - build_linux_enabled_section_types is defined
+ - build_linux_enabled_section_types | length > 0
+
+# Standard Build-linux single filesystem nodes
+- name: Generate the Build-linux kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+ tags: ['hosts']
+ vars:
+ node_template: "{{ kdevops_nodes_template | basename }}"
+ nodes: "{{ [kdevops_host_prefix + '-build-linux'] }}"
+ all_generic_nodes: "{{ [kdevops_host_prefix + '-build-linux'] }}"
+ ansible.builtin.template:
+ src: "{{ node_template }}"
+ dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+ force: true
+ mode: '0644'
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - ansible_nodes_template.stat.exists
+ - not kdevops_baseline_and_dev
+ - not build_linux_enable_multifs_testing|default(false)|bool
+
+- name: Generate the Build-linux kdevops nodes file with dev hosts using {{ kdevops_nodes_template }} as jinja2 source template
+ tags: ['hosts']
+ vars:
+ node_template: "{{ kdevops_nodes_template | basename }}"
+ nodes: "{{ [kdevops_host_prefix + '-build-linux', kdevops_host_prefix + '-build-linux-dev'] }}"
+ all_generic_nodes: "{{ [kdevops_host_prefix + '-build-linux', kdevops_host_prefix + '-build-linux-dev'] }}"
+ ansible.builtin.template:
+ src: "{{ node_template }}"
+ dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+ force: true
+ mode: '0644'
+ when:
+ - kdevops_workflows_dedicated_workflow
+ - kdevops_workflow_enable_build_linux|default(false)|bool
+ - ansible_nodes_template.stat.exists
+ - kdevops_baseline_and_dev
+ - not build_linux_enable_multifs_testing|default(false)|bool
+
- name: Get the control host's timezone
ansible.builtin.command: "timedatectl show -p Timezone --value"
register: kdevops_host_timezone
@@ -992,6 +1114,15 @@
vars:
hostname: "{{ item.name }}"
guestidx: "{{ idx }}"
+      # Build-linux tmpfs nodes need significantly more memory to hold the
+      # kernel source and build artifacts in RAM. The default of 4GB is
+      # insufficient; at least 16GB is needed for a shallow clone of the
+      # Linux kernel in tmpfs. The amount is configurable via
+      # BUILD_LINUX_TMPFS_MEM_GB.
+ libvirt_mem_mb: >-
+ {{
+ (build_linux_tmpfs_mem_gb|default(32)|int * 1024) if ('build-linux-tmpfs' in item.name and kdevops_workflow_enable_build_linux|default(false)|bool)
+ else libvirt_mem_mb
+ }}
ansible.builtin.template:
src: "guestfs_{{ libvirt_machine_type_string }}.j2.xml"
dest: "{{ topdir_path }}/guestfs/{{ hostname }}/{{ hostname }}.xml"
diff --git a/workflows/Makefile b/workflows/Makefile
index ee90227e..05c75a2d 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -74,6 +74,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO))
include workflows/minio/Makefile
endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_MINIO == y
+ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX))
+include workflows/build-linux/Makefile
+endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX == y
+
ANSIBLE_EXTRA_ARGS += $(WORKFLOW_ARGS)
ANSIBLE_EXTRA_ARGS_SEPARATED += $(WORKFLOW_ARGS_SEPARATED)
ANSIBLE_EXTRA_ARGS_DIRECT += $(WORKFLOW_ARGS_DIRECT)
diff --git a/workflows/build-linux/Kconfig b/workflows/build-linux/Kconfig
new file mode 100644
index 00000000..d2cdb71e
--- /dev/null
+++ b/workflows/build-linux/Kconfig
@@ -0,0 +1,255 @@
+if KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
+
+config BUILD_LINUX_RESULTS_DIR
+ string "Results directory"
+ output yaml
+ default "workflows/build-linux/results"
+ help
+ Directory where build results and statistics will be stored.
+
+config BUILD_LINUX_REPEAT_COUNT
+ int "Number of times to build Linux"
+ output yaml
+ default 100
+ help
+ Specify how many times to build the Linux kernel. This is useful for
+ performance testing and measuring build time variations. The default
+ is 100 builds.
+
+config BUILD_LINUX_MAKE_JOBS
+ int "Number of parallel make jobs"
+ output yaml
+ default 0
+ help
+	  Number of parallel jobs for make. If set to 0, the number of
+	  CPUs + 1 is used.
+
+config BUILD_LINUX_TARGET
+ string "Kernel build target"
+ output yaml
+ default "all"
+ help
+ The make target to build. Common options:
+ - all: Build everything (default)
+ - vmlinux: Build just the kernel image
+ - modules: Build just the modules
+ - bzImage: Build compressed kernel image (x86)
+
+config BUILD_LINUX_CLEAN_BETWEEN
+ bool "Clean build tree between builds"
+ output yaml
+ default y
+ help
+ If enabled, clean the build directory between each build iteration.
+ This ensures each build starts from scratch. For out-of-tree builds,
+ this removes all build artifacts but preserves results. For in-tree
+ builds, uses 'git clean -f -x -d'.
+
+config BUILD_LINUX_COLLECT_STATS
+ bool "Collect detailed build statistics"
+ output yaml
+ default y
+ help
+ Collect detailed timing statistics for each build including
+ CPU usage, memory usage, and build phase timings.
+
+config BUILD_LINUX_STORAGE_ENABLE
+ bool "Enable dedicated build filesystem"
+ output yaml
+ default n
+ help
+ Configure a dedicated filesystem for build output.
+ When enabled, the kernel will be built on a dedicated
+ filesystem which can be on a different storage device
+ for performance testing.
+
+if BUILD_LINUX_STORAGE_ENABLE
+
+config BUILD_LINUX_DEVICE
+ string "Device to use for build storage"
+ output yaml
+ default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
+ default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
+ default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
+ default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_2XLARGE
+ default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
+ default "/dev/nvme1n1" if TERRAFORM_GCE
+ default "/dev/sdd" if TERRAFORM_AZURE
+ default TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME if TERRAFORM_OCI
+ help
+ The device to use for build storage. This device will be
+ formatted and mounted to store kernel build outputs.
+
+choice
+ prompt "Build storage filesystem"
+ default BUILD_LINUX_FSTYPE_XFS
+
+config BUILD_LINUX_FSTYPE_XFS
+ bool "XFS"
+ help
+ Use XFS filesystem for build storage. XFS provides excellent
+ performance for large files and supports various block sizes
+ for testing large block size (LBS) configurations.
+
+config BUILD_LINUX_FSTYPE_BTRFS
+ bool "Btrfs"
+ help
+ Use Btrfs filesystem for build storage.
+
+config BUILD_LINUX_FSTYPE_EXT4
+ bool "ext4"
+ help
+ Use ext4 filesystem for build storage.
+
+config BUILD_LINUX_FSTYPE_TMPFS
+ bool "tmpfs (RAM)"
+ help
+ Use tmpfs (RAM) for build storage. Fast but requires lots of RAM.
+
+endchoice
+
+config BUILD_LINUX_FSTYPE
+ string
+ output yaml
+ default "xfs" if BUILD_LINUX_FSTYPE_XFS
+ default "btrfs" if BUILD_LINUX_FSTYPE_BTRFS
+ default "ext4" if BUILD_LINUX_FSTYPE_EXT4
+ default "tmpfs" if BUILD_LINUX_FSTYPE_TMPFS
+
+if BUILD_LINUX_FSTYPE_XFS
+
+choice
+ prompt "XFS block size configuration"
+ default BUILD_LINUX_XFS_BLOCKSIZE_4K
+
+config BUILD_LINUX_XFS_BLOCKSIZE_4K
+ bool "4K block size (default)"
+ help
+ Use 4K (4096 bytes) block size. This is the default and most
+ compatible configuration.
+
+config BUILD_LINUX_XFS_BLOCKSIZE_8K
+ bool "8K block size"
+ help
+ Use 8K (8192 bytes) block size for improved performance with
+ larger I/O operations.
+
+config BUILD_LINUX_XFS_BLOCKSIZE_16K
+ bool "16K block size (LBS)"
+ help
+ Use 16K (16384 bytes) block size. This is a large block size
+ configuration that may require kernel LBS support.
+
+config BUILD_LINUX_XFS_BLOCKSIZE_32K
+ bool "32K block size (LBS)"
+ help
+ Use 32K (32768 bytes) block size. This is a large block size
+ configuration that requires kernel LBS support.
+
+config BUILD_LINUX_XFS_BLOCKSIZE_64K
+ bool "64K block size (LBS)"
+ help
+ Use 64K (65536 bytes) block size. This is the maximum XFS block
+ size and requires kernel LBS support.
+
+endchoice
+
+config BUILD_LINUX_XFS_BLOCKSIZE
+ int
+ output yaml
+ default 4096 if BUILD_LINUX_XFS_BLOCKSIZE_4K
+ default 8192 if BUILD_LINUX_XFS_BLOCKSIZE_8K
+ default 16384 if BUILD_LINUX_XFS_BLOCKSIZE_16K
+ default 32768 if BUILD_LINUX_XFS_BLOCKSIZE_32K
+ default 65536 if BUILD_LINUX_XFS_BLOCKSIZE_64K
+
+choice
+ prompt "XFS sector size"
+ default BUILD_LINUX_XFS_SECTORSIZE_4K
+
+config BUILD_LINUX_XFS_SECTORSIZE_4K
+ bool "4K sector size (default)"
+ help
+ Use 4K (4096 bytes) sector size. This is the standard
+ configuration for most modern drives.
+
+config BUILD_LINUX_XFS_SECTORSIZE_512
+ bool "512 byte sector size"
+ depends on BUILD_LINUX_XFS_BLOCKSIZE_4K
+ help
+ Use legacy 512 byte sector size. Only available with 4K block size.
+
+config BUILD_LINUX_XFS_SECTORSIZE_8K
+ bool "8K sector size"
+ depends on BUILD_LINUX_XFS_BLOCKSIZE_8K || BUILD_LINUX_XFS_BLOCKSIZE_16K || BUILD_LINUX_XFS_BLOCKSIZE_32K || BUILD_LINUX_XFS_BLOCKSIZE_64K
+ help
+ Use 8K (8192 bytes) sector size. Requires block size >= 8K.
+
+config BUILD_LINUX_XFS_SECTORSIZE_16K
+ bool "16K sector size (LBS)"
+ depends on BUILD_LINUX_XFS_BLOCKSIZE_16K || BUILD_LINUX_XFS_BLOCKSIZE_32K || BUILD_LINUX_XFS_BLOCKSIZE_64K
+ help
+ Use 16K (16384 bytes) sector size. Requires block size >= 16K
+ and kernel LBS support.
+
+config BUILD_LINUX_XFS_SECTORSIZE_32K
+ bool "32K sector size (LBS)"
+ depends on BUILD_LINUX_XFS_BLOCKSIZE_32K || BUILD_LINUX_XFS_BLOCKSIZE_64K
+ help
+ Use 32K (32768 bytes) sector size. Requires block size >= 32K
+ and kernel LBS support.
+
+config BUILD_LINUX_XFS_SECTORSIZE_64K
+ bool "64K sector size (LBS)"
+ depends on BUILD_LINUX_XFS_BLOCKSIZE_64K
+ help
+ Use 64K (65536 bytes) sector size. Requires block size = 64K
+ and kernel LBS support.
+
+endchoice
+
+config BUILD_LINUX_XFS_SECTORSIZE
+ int
+ output yaml
+ default 512 if BUILD_LINUX_XFS_SECTORSIZE_512
+ default 4096 if BUILD_LINUX_XFS_SECTORSIZE_4K
+ default 8192 if BUILD_LINUX_XFS_SECTORSIZE_8K
+ default 16384 if BUILD_LINUX_XFS_SECTORSIZE_16K
+ default 32768 if BUILD_LINUX_XFS_SECTORSIZE_32K
+ default 65536 if BUILD_LINUX_XFS_SECTORSIZE_64K
+
+endif # BUILD_LINUX_FSTYPE_XFS
+
+endif # BUILD_LINUX_STORAGE_ENABLE
+
+config BUILD_LINUX_USE_LATEST_TAG
+ bool "Build latest Linus tag"
+ output yaml
+ default y
+ help
+ Automatically detect and build the latest tag from Linus' tree.
+ This will find the most recent v6.x tag (excluding -rc tags).
+
+config BUILD_LINUX_CUSTOM_TAG
+ string "Custom tag or branch to build"
+ output yaml
+ depends on !BUILD_LINUX_USE_LATEST_TAG
+ default "master"
+ help
+ Specify a custom git tag or branch to build instead of the
+ latest Linus tag.
+
+config BUILD_LINUX_ALLOW_MODIFICATIONS
+ bool "Allow kernel source modifications"
+ output yaml
+ default n
+ help
+ Enable this to allow modifications to the kernel source tree.
+ When disabled, the kernel source is kept pristine.
+
+# Multi-filesystem configuration when not skipping bringup
+if !KDEVOPS_USE_DECLARED_HOSTS && BUILD_LINUX_STORAGE_ENABLE
+source "workflows/build-linux/Kconfig.multifs"
+endif
+
+endif # KDEVOPS_WORKFLOW_ENABLE_BUILD_LINUX
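For reference, and not part of the patch itself: with the defaults above, the `output yaml` symbols should land in the generated extra_vars file roughly as follows (variable names assume kdevops' usual lowercasing of the CONFIG_ prefix):

```yaml
build_linux_results_dir: "workflows/build-linux/results"
build_linux_repeat_count: 100
build_linux_make_jobs: 0
build_linux_target: "all"
build_linux_clean_between: true
build_linux_collect_stats: true
build_linux_storage_enable: false
build_linux_use_latest_tag: true
```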
diff --git a/workflows/build-linux/Kconfig.multifs b/workflows/build-linux/Kconfig.multifs
new file mode 100644
index 00000000..dafff1de
--- /dev/null
+++ b/workflows/build-linux/Kconfig.multifs
@@ -0,0 +1,199 @@
+menu "Multi-filesystem testing configuration"
+
+config BUILD_LINUX_ENABLE_MULTIFS_TESTING
+ bool "Enable multi-filesystem testing"
+ default n
+ output yaml
+ help
+ Enable testing kernel builds across multiple filesystem
+ configurations. This allows comparing build performance
+ characteristics between different filesystems and their
+ configurations.
+
+ When enabled, the build workflow will run on multiple nodes,
+ each with a different filesystem configuration, allowing for
+ detailed performance analysis across different storage backends.
+
+if BUILD_LINUX_ENABLE_MULTIFS_TESTING
+
+config BUILD_LINUX_MULTIFS_TEST_XFS
+ bool "Test XFS configurations"
+ default y
+ output yaml
+ help
+ Enable testing kernel builds on XFS filesystem with different
+ block size configurations.
+
+if BUILD_LINUX_MULTIFS_TEST_XFS
+
+menu "XFS configuration profiles"
+
+config BUILD_LINUX_MULTIFS_XFS_4K_4KS
+ bool "XFS 4k block size - 4k sector size"
+ default y
+ output yaml
+ help
+ Test kernel builds on XFS with 4k filesystem block size
+ and 4k sector size. This is the most common configuration
+ and provides good performance for most workloads.
+
+config BUILD_LINUX_MULTIFS_XFS_8K_4KS
+ bool "XFS 8k block size - 4k sector size"
+ default n
+ output yaml
+ help
+ Test kernel builds on XFS with 8k filesystem block size
+ and 4k sector size. Slightly larger block size for improved
+ I/O efficiency.
+
+config BUILD_LINUX_MULTIFS_XFS_16K_4KS
+ bool "XFS 16k block size - 4k sector size"
+ default n
+ output yaml
+ help
+ Test kernel builds on XFS with 16k filesystem block size
+ and 4k sector size. Larger block sizes can improve performance
+ for large file operations common in kernel builds.
+
+config BUILD_LINUX_MULTIFS_XFS_32K_4KS
+ bool "XFS 32k block size - 4k sector size"
+ default n
+ output yaml
+ help
+ Test kernel builds on XFS with 32k filesystem block size
+ and 4k sector size. Even larger block sizes can provide
+ benefits for builds with many large object files.
+
+config BUILD_LINUX_MULTIFS_XFS_64K_4KS
+ bool "XFS 64k block size - 4k sector size"
+ default n
+ output yaml
+ help
+ Test kernel builds on XFS with 64k filesystem block size
+ and 4k sector size. Maximum supported block size for XFS,
+ testing extreme configurations for kernel builds.
+
+endmenu
+
+endif # BUILD_LINUX_MULTIFS_TEST_XFS
+
+config BUILD_LINUX_MULTIFS_TEST_EXT4
+ bool "Test ext4 configurations"
+ default y
+ output yaml
+ help
+ Enable testing kernel builds on ext4 filesystem with different
+ configurations including bigalloc options.
+
+if BUILD_LINUX_MULTIFS_TEST_EXT4
+
+menu "ext4 configuration profiles"
+
+config BUILD_LINUX_MULTIFS_EXT4_4K
+ bool "ext4 4k block size"
+ default y
+ output yaml
+ help
+ Test kernel builds on ext4 with standard 4k block size.
+ This is the default ext4 configuration.
+
+config BUILD_LINUX_MULTIFS_EXT4_16K_BIGALLOC
+ bool "ext4 16k bigalloc"
+ default n
+ output yaml
+ help
+ Test kernel builds on ext4 with 16k bigalloc enabled.
+ Bigalloc reduces metadata overhead and can improve
+ performance for large file workloads like kernel builds.
+
+endmenu
+
+endif # BUILD_LINUX_MULTIFS_TEST_EXT4
+
+config BUILD_LINUX_MULTIFS_TEST_BTRFS
+ bool "Test btrfs configurations"
+ default y
+ output yaml
+ help
+ Enable testing kernel builds on btrfs filesystem with
+ common default configuration profile.
+
+if BUILD_LINUX_MULTIFS_TEST_BTRFS
+
+menu "btrfs configuration profiles"
+
+config BUILD_LINUX_MULTIFS_BTRFS_DEFAULT
+ bool "btrfs default profile"
+ default y
+ output yaml
+ help
+ Test kernel builds on btrfs with default configuration.
+ This includes modern defaults with free-space-tree and
+ no-holes features enabled.
+
+endmenu
+
+endif # BUILD_LINUX_MULTIFS_TEST_BTRFS
+
+config BUILD_LINUX_MULTIFS_TEST_TMPFS
+ bool "Test tmpfs (RAM filesystem)"
+ default n
+ output yaml
+ help
+ Enable testing kernel builds on tmpfs (RAM-based filesystem).
+ This provides the fastest possible build times but requires
+ sufficient RAM to hold the entire kernel build.
+
+if BUILD_LINUX_MULTIFS_TEST_TMPFS
+
+menu "tmpfs configuration profiles"
+
+config BUILD_LINUX_MULTIFS_TMPFS_DEFAULT
+ bool "tmpfs default profile"
+ default y
+ output yaml
+ help
+ Test kernel builds on tmpfs with default configuration.
+ This will use RAM for the build directory, providing
+ maximum performance if sufficient memory is available.
+
+endmenu
+
+config BUILD_LINUX_TMPFS_MEM_GB
+ int "Memory in GB for tmpfs build nodes"
+ default 32
+ range 16 128
+ output yaml
+ help
+ Amount of memory in gigabytes to allocate for tmpfs build nodes.
+ Building the Linux kernel in tmpfs requires significant memory:
+ - Minimum 16GB for a shallow clone build
+ - Recommended 32GB for comfortable operation
+ - 64GB or more for full clone builds
+
+ This memory is only allocated to nodes with tmpfs in their name
+ when the build-linux workflow is enabled.
+
+endif # BUILD_LINUX_MULTIFS_TEST_TMPFS
+
+config BUILD_LINUX_MULTIFS_RESULTS_DIR
+ string "Multi-filesystem results directory"
+ output yaml
+ default "/data/build-linux-multifs-benchmark"
+ help
+ Directory where multi-filesystem test results and logs will be stored.
+ Each filesystem configuration will have its own subdirectory.
+
+config BUILD_LINUX_MULTIFS_USE_NODE_FS
+ bool "Automatically detect filesystem type from node name"
+ default y
+ output yaml
+ help
+ When enabled, the filesystem type for build storage will be
+ automatically determined based on the node's configuration name.
+ For example, nodes named *-xfs-* will use XFS, *-ext4-* will
+ use ext4, *-btrfs-* will use Btrfs, and *-tmpfs-* will use tmpfs.
+
+endif # BUILD_LINUX_ENABLE_MULTIFS_TESTING
+
+endmenu
diff --git a/workflows/build-linux/Makefile b/workflows/build-linux/Makefile
new file mode 100644
index 00000000..d19c3854
--- /dev/null
+++ b/workflows/build-linux/Makefile
@@ -0,0 +1,101 @@
+PHONY += build-linux build-linux-baseline build-linux-dev
+PHONY += build-linux-results build-linux-visualize build-linux-help-menu
+PHONY += monitor-results monitor-kill
+
+# Main build-linux workflow targets
+build-linux: $(KDEVOPS_NODES) $(ANSIBLE_INVENTORY_FILE)
+ $(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+ -i hosts \
+ playbooks/build_linux.yml \
+ -f 10 \
+ --extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+ $(LIMIT_HOSTS)
+ $(Q)echo "Build workflow completed. Collecting results..."
+ $(Q)$(MAKE) build-linux-results
+ $(Q)echo "Generating visualization..."
+ $(Q)$(MAKE) build-linux-visualize
+
+build-linux-baseline:
+ $(Q)$(MAKE) build-linux HOSTS="baseline"
+
+build-linux-dev:
+ $(Q)$(MAKE) build-linux HOSTS="dev"
+
+# Results collection target
+build-linux-results:
+ $(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+ -i hosts \
+ playbooks/build_linux_results.yml \
+ --extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+ $(LIMIT_HOSTS)
+
+# Visualize collected results with graphs and HTML report
+build-linux-visualize:
+ $(Q)if [ ! -d workflows/build-linux/results ] || [ -z "$$(ls -A workflows/build-linux/results 2>/dev/null)" ]; then \
+ echo "Error: No results found in workflows/build-linux/results/"; \
+ echo ""; \
+ echo "Did you run these commands first?"; \
+ echo " 1. make build-linux (to run the builds)"; \
+ echo " 2. make build-linux-results (to collect results from nodes)"; \
+ echo ""; \
+ echo "To collect results now, run: make build-linux-results"; \
+ exit 1; \
+ fi
+ $(Q)# Generate summary files if they don't exist
+ $(Q)if [ -z "$$(ls workflows/build-linux/results/*_summary_*.json 2>/dev/null)" ]; then \
+ echo "Generating summary files from build_times data..."; \
+ python3 workflows/build-linux/scripts/generate_summaries.py \
+ workflows/build-linux/results/ || \
+ echo "Warning: Failed to generate summary files"; \
+ fi
+ $(Q)echo "Loading results..."
+ $(Q)python3 workflows/build-linux/scripts/visualize_results.py \
+ workflows/build-linux/results/ || \
+ (echo ""; \
+ echo "Error: Failed to generate visualizations."; \
+ echo ""; \
+ echo "This workflow requires matplotlib for visualization."; \
+ echo "Please install the dependencies using:"; \
+ echo " make install-deps"; \
+ echo ""; \
+ echo "Or manually install matplotlib:"; \
+ echo " Debian/Ubuntu: sudo apt-get install python3-matplotlib python3-numpy"; \
+ echo " RHEL/Fedora: sudo dnf install python3-matplotlib python3-numpy"; \
+ echo " SUSE: sudo zypper install python3-matplotlib python3-numpy"; \
+ exit 1)
+
+monitor-results: $(KDEVOPS_EXTRA_VARS)
+ $(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+ playbooks/monitor-results.yml \
+ --extra-vars=@$(KDEVOPS_EXTRA_VARS)
+
+monitor-kill: $(KDEVOPS_EXTRA_VARS)
+ $(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+ -i hosts \
+ playbooks/monitor-kill.yml \
+ --extra-vars=@$(KDEVOPS_EXTRA_VARS) \
+ $(LIMIT_HOSTS)
+
+build-linux-help-menu:
+ @echo "Build Linux workflow targets:"
+ @echo ""
+ @echo "Main targets:"
+ @echo " build-linux - Build Linux kernel multiple times, collect results, and visualize"
+ @echo " build-linux-baseline - Build on baseline nodes only (includes results + visualization)"
+ @echo " build-linux-dev - Build on dev nodes only (includes results + visualization)"
+ @echo ""
+ @echo "Results collection (also run automatically by build-linux):"
+ @echo " build-linux-results - Collect and analyze build statistics (standalone)"
+ @echo " build-linux-visualize - Generate HTML report with performance graphs (standalone)"
+ @echo " monitor-results - Collect monitoring data (requires CONFIG_ENABLE_MONITORING)"
+ @echo " monitor-kill - Kill all monitoring processes and clean up data"
+ @echo ""
+ @echo "Configuration:"
+ @echo " Repeat count: $(CONFIG_BUILD_LINUX_REPEAT_COUNT) builds"
+ @echo " Target: $(CONFIG_BUILD_LINUX_TARGET)"
+ @echo " Clean between: $(CONFIG_BUILD_LINUX_CLEAN_BETWEEN)"
+ @echo ""
+
+HELP_TARGETS += build-linux-help-menu
+
+.PHONY: $(PHONY)
diff --git a/workflows/build-linux/scripts/build_linux.py b/workflows/build-linux/scripts/build_linux.py
new file mode 100644
index 00000000..cf9e1ee3
--- /dev/null
+++ b/workflows/build-linux/scripts/build_linux.py
@@ -0,0 +1,314 @@
+#!/usr/bin/env python3
+"""
+Build Linux kernel multiple times and collect statistics.
+
+This script builds the Linux kernel multiple times to measure build
+performance and collect statistics. It runs as a regular user and
+does not require root privileges.
+"""
+
+import os
+import sys
+import time
+import json
+import argparse
+import subprocess
+import statistics
+from pathlib import Path
+from datetime import datetime
+
+
+class LinuxBuilder:
+ def __init__(self, args):
+ self.args = args
+ self.results = []
+ self.build_dir = Path(args.build_dir)
+ self.source_dir = Path(args.source_dir)
+ self.results_dir = Path(args.results_dir)
+
+ # Create results directory
+ self.results_dir.mkdir(parents=True, exist_ok=True)
+
+ # Determine number of jobs
+ if args.jobs == 0:
+            # os.cpu_count() may return None; fall back to 1 CPU
+            self.jobs = (os.cpu_count() or 1) + 1
+ else:
+ self.jobs = args.jobs
+
+    def run_command(self, cmd, cwd=None, capture=True):
+        """Run a shell command, optionally capturing its output."""
+        return subprocess.run(
+            cmd, cwd=cwd, shell=True, capture_output=capture, text=capture
+        )
+
+ def get_latest_tag(self):
+ """Get the latest stable kernel tag from the git repository."""
+ cmd = "git tag --list 'v6.*' | grep -v -- '-rc' | sort -V | tail -1"
+ result = self.run_command(cmd, cwd=self.source_dir)
+        if result.returncode == 0 and result.stdout.strip():
+            return result.stdout.strip()
+        print(f"Failed to get latest tag: {result.stderr}")
+        return None
+
+    def is_git_writable(self):
+        """Check if the git repository is writable."""
+        git_dir = self.source_dir / ".git"
+        if not git_dir.exists():
+            return False
+
+        # Check for write access to the git index file, or to the .git
+        # directory itself when no index exists yet
+        try:
+            test_result = self.run_command(
+                "test -w .git/index || test -w .git", cwd=self.source_dir
+            )
+            return test_result.returncode == 0
+        except OSError:
+            return False
+
+ def checkout_tag(self, tag):
+ """Checkout a specific git tag or branch."""
+ print(f"Checking out {tag}...")
+ cmd = f"git checkout {tag}"
+ result = self.run_command(cmd, cwd=self.source_dir)
+ if result.returncode != 0:
+ print(f"Failed to checkout {tag}: {result.stderr}")
+ return False
+ return True
+
+ def configure_kernel(self):
+ """Configure the kernel with defconfig."""
+ print("Configuring kernel...")
+
+ # Ensure build directory exists
+ self.build_dir.mkdir(parents=True, exist_ok=True)
+
+ # For out-of-tree builds, specify source directory
+ if self.build_dir != self.source_dir:
+ # Use absolute paths to avoid any confusion
+ cmd = f"make -C {self.source_dir.absolute()} O={self.build_dir.absolute()} defconfig"
+ # Run from the build directory
+ result = self.run_command(cmd, cwd=self.build_dir)
+ else:
+ cmd = "make defconfig"
+ result = self.run_command(cmd, cwd=self.source_dir)
+
+ if result.returncode != 0:
+ print(f"Failed to configure kernel: {result.stderr}")
+ return False
+ return True
+
+ def clean_build(self):
+ """Clean the build directory."""
+ if self.args.clean_between:
+ print("Cleaning build directory...")
+            # For out-of-tree builds, remove the build directory contents
+            # but keep the directory itself; note the shell glob does not
+            # match dotfiles such as .config
+            if self.build_dir != self.source_dir:
+                cmd = "rm -rf *"
+ self.run_command(cmd, cwd=self.build_dir, capture=False)
+ else:
+ # For in-tree builds, use git clean
+ cmd = "git clean -f -x -d"
+ self.run_command(cmd, cwd=self.source_dir, capture=False)
+
+ def build_kernel(self, iteration):
+ """Build the kernel and measure time."""
+ print(f"Starting build {iteration} of {self.args.count}...")
+
+ # Clean if requested
+ self.clean_build()
+
+ # Record start time
+ start_time = time.time()
+ start_datetime = datetime.now()
+
+ # Build command - for out-of-tree builds, specify source dir
+ if self.build_dir != self.source_dir:
+ cmd = f"make -C {self.source_dir} O={self.build_dir} -j{self.jobs} {self.args.target}"
+ else:
+ cmd = f"make -j{self.jobs} {self.args.target}"
+
+ if self.args.collect_stats:
+ cmd = f"/usr/bin/time -v {cmd}"
+
+ # Log file for this build - store in results dir, not build dir
+ log_file = self.results_dir / f"build_{iteration}.log"
+
+ # Run the build
+ with open(log_file, "w") as f:
+ result = subprocess.run(cmd, shell=True, stdout=f, stderr=subprocess.STDOUT)
+
+ # Record end time
+ end_time = time.time()
+ end_datetime = datetime.now()
+ duration = end_time - start_time
+
+ # Store result
+ build_result = {
+ "iteration": iteration,
+ "start_time": start_datetime.isoformat(),
+ "end_time": end_datetime.isoformat(),
+ "duration": duration,
+ "exit_code": result.returncode,
+ "success": result.returncode == 0,
+ }
+
+ self.results.append(build_result)
+
+ if result.returncode != 0:
+ print(f" Build {iteration} failed with exit code {result.returncode}")
+ else:
+ print(f" Build {iteration} completed in {duration:.2f} seconds")
+
+ return result.returncode == 0
+
+ def save_results(self):
+ """Save results to JSON and generate summary."""
+ # Save raw results
+ results_file = self.results_dir / f"build_times_{os.uname().nodename}.json"
+ with open(results_file, "w") as f:
+ json.dump(self.results, f, indent=2)
+
+ # Generate summary
+ successful_builds = [r for r in self.results if r["success"]]
+ if successful_builds:
+ durations = [r["duration"] for r in successful_builds]
+
+ summary = {
+ "hostname": os.uname().nodename,
+ "total_builds": self.args.count,
+ "successful_builds": len(successful_builds),
+ "failed_builds": len(self.results) - len(successful_builds),
+ "build_target": self.args.target,
+ "make_jobs": self.jobs,
+ "clean_between": self.args.clean_between,
+ "statistics": {
+ "average": statistics.mean(durations),
+ "median": statistics.median(durations),
+ "min": min(durations),
+ "max": max(durations),
+ "total_time": sum(durations),
+ "total_hours": sum(durations) / 3600,
+ },
+ }
+
+ if len(durations) > 1:
+ summary["statistics"]["stddev"] = statistics.stdev(durations)
+
+ # Save summary
+ summary_file = self.results_dir / f"summary_{os.uname().nodename}.json"
+ with open(summary_file, "w") as f:
+ json.dump(summary, f, indent=2)
+
+ # Print summary
+ print("\nBuild Statistics Summary")
+ print("========================")
+ print(f"Total builds: {summary['total_builds']}")
+ print(f"Successful builds: {summary['successful_builds']}")
+ print(f"Failed builds: {summary['failed_builds']}")
+ print(f"Build target: {summary['build_target']}")
+ print(f"Make jobs: {summary['make_jobs']}")
+ print()
+ print(f"Average build time: {summary['statistics']['average']:.2f} seconds")
+ print(f"Median build time: {summary['statistics']['median']:.2f} seconds")
+ print(f"Minimum build time: {summary['statistics']['min']:.2f} seconds")
+ print(f"Maximum build time: {summary['statistics']['max']:.2f} seconds")
+ print(
+ f"Total time: {summary['statistics']['total_time']:.2f} seconds "
+ f"({summary['statistics']['total_hours']:.2f} hours)"
+ )
+ if "stddev" in summary["statistics"]:
+ print(
+ f"Standard deviation: {summary['statistics']['stddev']:.2f} seconds"
+ )
+
+ def run(self):
+ """Run the build workflow."""
+ # Determine which tag to build
+ if self.args.use_latest:
+ tag = self.get_latest_tag()
+ if not tag:
+ print("Failed to determine latest tag")
+ return 1
+ print(f"Using latest stable tag: {tag}")
+ else:
+ tag = self.args.tag
+ print(f"Using custom tag: {tag}")
+
+ # Checkout the tag (skip if git repo is read-only)
+ if self.is_git_writable():
+ if not self.checkout_tag(tag):
+ return 1
+ else:
+ print(
+ "Skipping git checkout - repository is read-only (9P mount detected)"
+ )
+
+ # Configure kernel if needed
+ config_file = self.build_dir / ".config"
+ if not config_file.exists():
+ # Check if there's a config in the source directory we can copy (for read-only mounts)
+ source_config = self.source_dir / ".config"
+ if not self.is_git_writable() and source_config.exists():
+ print("Copying configuration from read-only source directory...")
+ import shutil
+
+ shutil.copy2(source_config, config_file)
+ else:
+ if not self.configure_kernel():
+ return 1
+
+ # Run builds
+ for i in range(1, self.args.count + 1):
+ self.build_kernel(i)
+
+ # Brief pause between builds
+ if i < self.args.count:
+ time.sleep(1)
+
+ # Save results
+ self.save_results()
+
+ print(f"\nCompleted {self.args.count} builds")
+ print(f"Results saved to {self.results_dir}")
+
+ return 0
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Build Linux kernel multiple times")
+ parser.add_argument("--source-dir", required=True, help="Linux source directory")
+ parser.add_argument("--build-dir", required=True, help="Build output directory")
+ parser.add_argument("--results-dir", required=True, help="Results directory")
+ parser.add_argument("--count", type=int, default=100, help="Number of builds")
+ parser.add_argument(
+ "--jobs", type=int, default=0, help="Number of make jobs (0=auto)"
+ )
+ parser.add_argument("--target", default="all", help="Make target to build")
+ parser.add_argument(
+ "--clean-between", action="store_true", help="Clean between builds"
+ )
+ parser.add_argument(
+ "--collect-stats", action="store_true", help="Collect detailed stats"
+ )
+ parser.add_argument(
+ "--use-latest", action="store_true", help="Use latest stable tag"
+ )
+ parser.add_argument("--tag", default="master", help="Git tag/branch to build")
+
+ args = parser.parse_args()
+
+ builder = LinuxBuilder(args)
+ return builder.run()
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/workflows/build-linux/scripts/combine_results.py b/workflows/build-linux/scripts/combine_results.py
new file mode 100644
index 00000000..b655a492
--- /dev/null
+++ b/workflows/build-linux/scripts/combine_results.py
@@ -0,0 +1,98 @@
+#!/usr/bin/env python3
+"""
+Combine build results from multiple hosts into a single report.
+"""
+
+import json
+import glob
+import sys
+import argparse
+from pathlib import Path
+
+
+def combine_results(results_dir):
+ """Combine summary files from multiple hosts."""
+ results_path = Path(results_dir)
+ summary_files = list(results_path.glob("*_summary_*.json"))
+
+ if not summary_files:
+ print("No summary files found")
+ return 1
+
+ combined = {
+ "hosts": {},
+ "totals": {
+ "total_builds": 0,
+ "successful_builds": 0,
+ "failed_builds": 0,
+ "total_time_hours": 0,
+ },
+ }
+
+ for summary_file in summary_files:
+ with open(summary_file, "r") as f:
+ data = json.load(f)
+ hostname = data["hostname"]
+ combined["hosts"][hostname] = data
+ combined["totals"]["total_builds"] += data["total_builds"]
+ combined["totals"]["successful_builds"] += data["successful_builds"]
+ combined["totals"]["failed_builds"] += data["failed_builds"]
+ combined["totals"]["total_time_hours"] += data["statistics"]["total_hours"]
+
+ # Calculate aggregate statistics
+ all_durations = []
+ for host_data in combined["hosts"].values():
+ if "statistics" in host_data and "average" in host_data["statistics"]:
+ # Approximate durations based on average and count
+ count = host_data["successful_builds"]
+ avg = host_data["statistics"]["average"]
+ all_durations.extend([avg] * count)
+
+ if all_durations:
+ combined["aggregate_stats"] = {
+ "average": sum(all_durations) / len(all_durations),
+ "min": min(all_durations),
+ "max": max(all_durations),
+ "total_builds": len(all_durations),
+ }
+
+ # Save combined report
+ report_file = results_path / "combined_report.json"
+ with open(report_file, "w") as f:
+ json.dump(combined, f, indent=2)
+
+ # Print summary
+ print("Build Linux Combined Results Report")
+ print("====================================")
+ print(f"Total hosts: {len(combined['hosts'])}")
+ print(f"Total builds: {combined['totals']['total_builds']}")
+ print(f"Successful builds: {combined['totals']['successful_builds']}")
+ print(f"Failed builds: {combined['totals']['failed_builds']}")
+ print(f"Total time: {combined['totals']['total_time_hours']:.2f} hours")
+ print()
+
+ for hostname, data in combined["hosts"].items():
+ print(f"Host: {hostname}")
+ print(f" Builds: {data['successful_builds']}/{data['total_builds']}")
+ print(f" Average: {data['statistics']['average']:.2f} seconds")
+ print(
+ f" Min/Max: {data['statistics']['min']:.2f}/{data['statistics']['max']:.2f} seconds"
+ )
+ print()
+
+ print(f"Report saved to: {report_file}")
+ return 0
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Combine build results from multiple hosts"
+ )
+ parser.add_argument("results_dir", help="Directory containing result files")
+ args = parser.parse_args()
+
+ return combine_results(args.results_dir)
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/workflows/build-linux/scripts/generate_summaries.py b/workflows/build-linux/scripts/generate_summaries.py
new file mode 100755
index 00000000..2f8aa9ab
--- /dev/null
+++ b/workflows/build-linux/scripts/generate_summaries.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python3
+"""
+Generate summary files from build_times JSON files.
+"""
+
+import json
+import sys
+import argparse
+from pathlib import Path
+import statistics
+import socket
+import os
+
+
+def generate_summary_from_timing(timing_file):
+ """Generate a summary from a build_times JSON file."""
+ with open(timing_file, "r") as f:
+ timing_data = json.load(f)
+
+ # Extract hostname from filename
+ # Format: hostname_build_times_hostname.json
+ filename = timing_file.stem
+ hostname = filename.split("_build_times_")[-1]
+
+ # Count successful and failed builds
+ successful = [entry for entry in timing_data if entry.get("success", False)]
+ failed = [entry for entry in timing_data if not entry.get("success", False)]
+
+ summary = {
+ "hostname": hostname,
+ "total_builds": len(timing_data),
+ "successful_builds": len(successful),
+ "failed_builds": len(failed),
+ "build_target": "vmlinux", # Target is not recorded in timing data
+ "make_jobs": os.cpu_count(),
+ "statistics": {},
+ }
+
+ # Calculate statistics for successful builds
+ if successful:
+ durations = [entry["duration"] for entry in successful]
+ summary["statistics"] = {
+ "average": statistics.mean(durations),
+ "median": statistics.median(durations),
+ "min": min(durations),
+ "max": max(durations),
+ "total_time": sum(durations),
+ "total_hours": sum(durations) / 3600,
+ }
+
+ if len(durations) > 1:
+ summary["statistics"]["stddev"] = statistics.stdev(durations)
+ else:
+ # If no successful builds, use all builds for stats
+ durations = [entry["duration"] for entry in timing_data]
+ summary["statistics"] = {
+ "average": statistics.mean(durations) if durations else 0,
+ "median": statistics.median(durations) if durations else 0,
+ "min": min(durations) if durations else 0,
+ "max": max(durations) if durations else 0,
+ "total_time": sum(durations) if durations else 0,
+ "total_hours": sum(durations) / 3600 if durations else 0,
+ }
+
+ return summary
+
+
+def generate_all_summaries(results_dir):
+ """Generate summary files for all build_times JSON files."""
+ results_path = Path(results_dir)
+ timing_files = list(results_path.glob("*_build_times_*.json"))
+
+ if not timing_files:
+ print("No build_times JSON files found")
+ return 1
+
+ print(f"Found {len(timing_files)} build_times files")
+
+ for timing_file in timing_files:
+ summary = generate_summary_from_timing(timing_file)
+
+ # Save summary file
+ hostname = summary["hostname"]
+ summary_file = results_path / f"{hostname}_summary_{hostname}.json"
+
+ with open(summary_file, "w") as f:
+ json.dump(summary, f, indent=2)
+
+ print(f"Generated summary for {hostname}: {summary_file.name}")
+ print(f" Total builds: {summary['total_builds']}")
+ print(f" Successful: {summary['successful_builds']}")
+ print(f" Failed: {summary['failed_builds']}")
+ if summary["statistics"]:
+ print(f" Average time: {summary['statistics']['average']:.2f}s")
+ print()
+
+ return 0
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Generate summary files from build_times JSON files"
+ )
+ parser.add_argument(
+ "results_dir", help="Directory containing build_times JSON files"
+ )
+ args = parser.parse_args()
+
+ return generate_all_summaries(args.results_dir)
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/workflows/build-linux/scripts/visualize_results.py b/workflows/build-linux/scripts/visualize_results.py
new file mode 100755
index 00000000..0c0a1018
--- /dev/null
+++ b/workflows/build-linux/scripts/visualize_results.py
@@ -0,0 +1,1015 @@
+#!/usr/bin/env python3
+"""
+Visualize build results with matplotlib graphs and generate HTML reports.
+"""
+
+import json
+import sys
+import argparse
+from pathlib import Path
+import statistics
+from datetime import datetime
+import base64
+from io import BytesIO
+
+# Try to import matplotlib, but make it optional
+try:
+ import matplotlib
+
+ matplotlib.use("Agg") # Use non-interactive backend
+ import matplotlib.pyplot as plt
+ import numpy as np
+
+ MATPLOTLIB_AVAILABLE = True
+except ImportError:
+ MATPLOTLIB_AVAILABLE = False
+ print("Warning: matplotlib not available. Install with your package manager:")
+ print(" Debian/Ubuntu: sudo apt-get install python3-matplotlib python3-numpy")
+ print(" RHEL/Fedora: sudo dnf install python3-matplotlib python3-numpy")
+ print(" SUSE: sudo zypper install python3-matplotlib python3-numpy")
+
+
+def load_all_results(results_dir):
+ """Load all timing and summary data from the results directory."""
+ results_path = Path(results_dir)
+
+ # Load summary files
+ summaries = {}
+ for summary_file in results_path.glob("*_summary_*.json"):
+ with open(summary_file, "r") as f:
+ data = json.load(f)
+ hostname = data["hostname"]
+ summaries[hostname] = data
+
+ # Load timing files
+ timings = {}
+ for timing_file in results_path.glob("*_build_times_*.json"):
+ with open(timing_file, "r") as f:
+ data = json.load(f)
+ # Extract hostname from filename
+ hostname = timing_file.stem.split("_build_times_")[-1]
+ timings[hostname] = data
+
+ # If no summary exists for this host (all builds failed), create one
+ if hostname not in summaries:
+ successful_builds = [r for r in data if r["success"]]
+ failed_builds = [r for r in data if not r["success"]]
+
+ # Determine failure reason from exit codes
+ exit_codes = [r.get("exit_code", -1) for r in failed_builds]
+ if 127 in exit_codes:
+ failure_reason = "Command not found (exit code 127)"
+ elif 137 in exit_codes:
+ failure_reason = "Out of memory (OOM killed, exit code 137)"
+ elif failed_builds:
+ failure_reason = f"Build failures (exit codes: {set(exit_codes)})"
+ else:
+ failure_reason = "Unknown"
+
+ summaries[hostname] = {
+ "hostname": hostname,
+ "total_builds": len(data),
+ "successful_builds": len(successful_builds),
+ "failed_builds": len(failed_builds),
+ "build_target": "unknown",
+ "failure_reason": failure_reason,
+ "statistics": {
+ "average": (
+ statistics.mean([r["duration"] for r in successful_builds])
+ if successful_builds
+ else 0
+ ),
+ "median": (
+ statistics.median(
+ [r["duration"] for r in successful_builds]
+ )
+ if successful_builds
+ else 0
+ ),
+ "min": (
+ min([r["duration"] for r in successful_builds])
+ if successful_builds
+ else 0
+ ),
+ "max": (
+ max([r["duration"] for r in successful_builds])
+ if successful_builds
+ else 0
+ ),
+ "total_time": (
+ sum([r["duration"] for r in successful_builds])
+ if successful_builds
+ else 0
+ ),
+ "total_hours": (
+ sum([r["duration"] for r in successful_builds]) / 3600
+ if successful_builds
+ else 0
+ ),
+ },
+ }
+
+ # Load monitoring data if available
+ monitoring = {}
+ monitoring_dir = results_path / "monitoring"
+ if monitoring_dir.exists():
+ # Load fragmentation data
+ frag_dir = monitoring_dir / "fragmentation"
+ if frag_dir.exists():
+ monitoring["fragmentation"] = {}
+ for frag_file in frag_dir.glob("*_fragmentation_data.json"):
+ hostname = frag_file.stem.replace("_fragmentation_data", "")
+ with open(frag_file, "r") as f:
+ monitoring["fragmentation"][hostname] = json.load(f)
+
+ # Check for folio migration plots
+ monitoring["folio_plots"] = []
+ for plot_file in monitoring_dir.glob("*_folio_migration_plot.png"):
+ monitoring["folio_plots"].append(plot_file.name)
+
+ # Check for stats files
+ monitoring["stats"] = {}
+ for stats_file in monitoring_dir.glob("*_folio_migration_stats.txt"):
+ hostname = stats_file.stem.replace("_folio_migration_stats", "")
+ with open(stats_file, "r") as f:
+ monitoring["stats"][hostname] = f.read()
+
+ return summaries, timings, monitoring
+
+
+def create_build_time_comparison_chart(summaries):
+ """Create a bar chart comparing average build times across hosts."""
+ if not MATPLOTLIB_AVAILABLE:
+ return None
+
+ hosts = list(summaries.keys())
+ avg_times = [summaries[h]["statistics"]["average"] for h in hosts]
+ min_times = [summaries[h]["statistics"]["min"] for h in hosts]
+ max_times = [summaries[h]["statistics"]["max"] for h in hosts]
+
+ fig, ax = plt.subplots(figsize=(12, 6))
+
+ x = np.arange(len(hosts))
+ width = 0.35
+
+ # Create bars with error bars showing min/max
+ bars = ax.bar(x, avg_times, width, label="Average", color="steelblue")
+
+ # Add min/max error bars with low alpha so the bar labels stay readable
+ errors = [
+ [avg_times[i] - min_times[i] for i in range(len(hosts))],
+ [max_times[i] - avg_times[i] for i in range(len(hosts))],
+ ]
+ ax.errorbar(
+ x,
+ avg_times,
+ yerr=errors,
+ fmt="none",
+ color="black",
+ capsize=5,
+ alpha=0.3,
+ linewidth=1,
+ )
+
+ ax.set_xlabel("Host / Filesystem Configuration")
+ ax.set_ylabel("Build Time (seconds)")
+ ax.set_title("Linux Kernel Build Times Across Different Filesystem Configurations")
+ ax.set_xticks(x)
+ ax.set_xticklabels(
+ [h.replace("lpc-build-linux-", "") for h in hosts], rotation=45, ha="right"
+ )
+ ax.legend()
+ ax.grid(True, alpha=0.3)
+
+ # Add value labels on bars with slight offset to avoid overlap with error bars
+ for i, bar in enumerate(bars):
+ height = bar.get_height()
+ ax.text(
+ bar.get_x() + bar.get_width() / 2.0,
+ height + 1, # Small offset keeps the label clear of the error bar
+ f"{height:.1f}s",
+ ha="center",
+ va="bottom",
+ fontweight="bold",
+ fontsize=10,
+ )
+
+ # Adjust layout to prevent overlapping labels
+ plt.subplots_adjust(bottom=0.15, top=0.95, left=0.1, right=0.95)
+
+ # Convert to base64 for embedding in HTML
+ buffer = BytesIO()
+ plt.savefig(buffer, format="png", dpi=100, bbox_inches="tight")
+ buffer.seek(0)
+ image_base64 = base64.b64encode(buffer.getvalue()).decode()
+ plt.close()
+
+ return image_base64
+
+
+def create_build_time_distribution(timings):
+ """Create box plots showing distribution of build times."""
+ if not MATPLOTLIB_AVAILABLE:
+ return None
+
+ fig, ax = plt.subplots(figsize=(12, 6))
+
+ hosts = list(timings.keys())
+ data = []
+ labels = []
+
+ for host in hosts:
+ durations = [entry["duration"] for entry in timings[host] if entry["success"]]
+ if durations:
+ data.append(durations)
+ labels.append(host.replace("lpc-build-linux-", ""))
+
+ if not data:
+ return None
+
+ bp = ax.boxplot(data, tick_labels=labels, patch_artist=True)
+
+ # Color the boxes
+ colors = plt.cm.Set3(np.linspace(0, 1, len(data)))
+ for patch, color in zip(bp["boxes"], colors):
+ patch.set_facecolor(color)
+
+ ax.set_xlabel("Host / Filesystem Configuration")
+ ax.set_ylabel("Build Time (seconds)")
+ ax.set_title("Build Time Distribution per Configuration")
+ ax.grid(True, alpha=0.3, axis="y")
+ plt.xticks(rotation=45, ha="right")
+ # Adjust layout to prevent overlapping labels
+ plt.subplots_adjust(bottom=0.15, top=0.95, left=0.1, right=0.95)
+
+ # Convert to base64
+ buffer = BytesIO()
+ plt.savefig(buffer, format="png", dpi=100, bbox_inches="tight")
+ buffer.seek(0)
+ image_base64 = base64.b64encode(buffer.getvalue()).decode()
+ plt.close()
+
+ return image_base64
+
+
+def create_build_timeline(timings):
+ """Create a timeline showing build durations over iterations."""
+ if not MATPLOTLIB_AVAILABLE:
+ return None
+
+ fig, ax = plt.subplots(figsize=(14, 8))
+
+ colors = plt.cm.tab10(np.linspace(0, 1, len(timings)))
+
+ for idx, (host, timing_data) in enumerate(timings.items()):
+ iterations = [entry["iteration"] for entry in timing_data]
+ durations = [entry["duration"] for entry in timing_data]
+
+ ax.plot(
+ iterations,
+ durations,
+ marker="o",
+ label=host.replace("lpc-build-linux-", ""),
+ alpha=0.7,
+ linewidth=2,
+ markersize=6,
+ color=colors[idx],
+ )
+
+ ax.set_xlabel("Build Iteration")
+ ax.set_ylabel("Build Time (seconds)")
+ ax.set_title("Build Time Progression Over Iterations")
+ ax.legend(bbox_to_anchor=(1.05, 1), loc="upper left")
+ ax.grid(True, alpha=0.3)
+
+ # Adjust layout to prevent overlapping labels
+ plt.subplots_adjust(bottom=0.15, top=0.95, left=0.1, right=0.95)
+
+ # Convert to base64
+ buffer = BytesIO()
+ plt.savefig(buffer, format="png", dpi=100, bbox_inches="tight")
+ buffer.seek(0)
+ image_base64 = base64.b64encode(buffer.getvalue()).decode()
+ plt.close()
+
+ return image_base64
+
+
+def create_success_rate_chart(summaries):
+ """Create a chart showing success rates."""
+ if not MATPLOTLIB_AVAILABLE:
+ return None
+
+ fig, ax = plt.subplots(figsize=(10, 6))
+
+ hosts = list(summaries.keys())
+ success_rates = []
+
+ for host in hosts:
+ total = summaries[host]["total_builds"]
+ successful = summaries[host]["successful_builds"]
+ success_rates.append((successful / total) * 100 if total > 0 else 0)
+
+ colors = [
+ "green" if rate == 100 else "orange" if rate >= 95 else "red"
+ for rate in success_rates
+ ]
+
+ bars = ax.bar(range(len(hosts)), success_rates, color=colors, alpha=0.7)
+
+ ax.set_xlabel("Host / Filesystem Configuration")
+ ax.set_ylabel("Success Rate (%)")
+ ax.set_title("Build Success Rates")
+ ax.set_xticks(range(len(hosts)))
+ ax.set_xticklabels(
+ [h.replace("lpc-build-linux-", "") for h in hosts], rotation=45, ha="right"
+ )
+ ax.set_ylim(0, 105)
+ ax.grid(True, alpha=0.3, axis="y")
+
+ # Add value labels on bars
+ for i, (bar, rate) in enumerate(zip(bars, success_rates)):
+ height = bar.get_height()
+ ax.text(
+ bar.get_x() + bar.get_width() / 2.0,
+ height,
+ f"{rate:.1f}%",
+ ha="center",
+ va="bottom",
+ )
+
+ # Adjust layout to prevent overlapping labels
+ plt.subplots_adjust(bottom=0.15, top=0.95, left=0.1, right=0.95)
+
+ # Convert to base64
+ buffer = BytesIO()
+ plt.savefig(buffer, format="png", dpi=100, bbox_inches="tight")
+ buffer.seek(0)
+ image_base64 = base64.b64encode(buffer.getvalue()).decode()
+ plt.close()
+
+ return image_base64
+
+
+def generate_monitoring_section(results_dir, monitoring):
+ """Generate HTML section for monitoring data."""
+ if not monitoring:
+ return ""
+
+ monitoring_path = Path(results_dir) / "monitoring"
+
+ html = """
+ <div class="section">
+ <h2 class="section-title">📊 System Monitoring Data</h2>
+ """
+
+ # Check for folio migration plots
+ folio_plots = sorted(monitoring_path.glob("*_folio_migration_plot.png"))
+
+ # Check for folio migration comparison charts
+ folio_comparison_plots = sorted(monitoring_path.glob("folio_comparison_*.png"))
+
+ # Add folio migration plots if available
+ if folio_plots:
+ html += """
+ <h3>Folio Migration Analysis</h3>
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(400px, 1fr)); gap: 20px; margin: 20px 0;">
+ """
+
+ for plot_path in folio_plots:
+ hostname = plot_path.stem.replace("_folio_migration_plot", "").replace(
+ "lpc-build-linux-", ""
+ )
+ # Read image and convert to base64
+ try:
+ with open(plot_path, "rb") as f:
+ image_base64 = base64.b64encode(f.read()).decode()
+ html += f"""
+ <div style="text-align: center;">
+ <h4>{hostname}</h4>
+ <a href="monitoring/{plot_path.name}" target="_blank" title="Click for full-size image">
+ <img src="data:image/png;base64,{image_base64}" style="max-width: 100%; border-radius: 5px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); cursor: pointer;" />
+ </a>
+ <p style="font-size: 0.8em; color: #718096; margin-top: 5px;">Click image for full size</p>
+ </div>
+ """
+ except Exception:
+ # Fallback to file link if embedding fails
+ html += f"""
+ <div style="text-align: center;">
+ <h4>{hostname}</h4>
+ <a href="file://{plot_path.absolute()}" style="color: #007bff;">View folio migration plot</a>
+ </div>
+ """
+
+ html += """
+ </div>
+ <p style="margin-top: 10px; color: #718096;">These plots show folio migration patterns during kernel builds.</p>
+ """
+
+ # Add folio migration comparison charts right after individual plots
+ if folio_comparison_plots:
+ html += """
+ <h3>Folio Migration Comparisons</h3>
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(600px, 1fr)); gap: 20px; margin: 20px 0;">
+ """
+
+ for plot_path in folio_comparison_plots:
+ # Extract comparison name from filename
+ comparison_name = plot_path.stem.replace("folio_comparison_", "").replace(
+ "_", " "
+ )
+ # Read image and convert to base64
+ try:
+ with open(plot_path, "rb") as f:
+ image_base64 = base64.b64encode(f.read()).decode()
+ html += f"""
+ <div style="text-align: center;">
+ <h4 style="text-transform: capitalize;">{comparison_name}</h4>
+ <a href="monitoring/{plot_path.name}" target="_blank" title="Click for full-size image">
+ <img src="data:image/png;base64,{image_base64}" style="max-width: 100%; border-radius: 5px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); cursor: pointer;" />
+ </a>
+ <p style="font-size: 0.8em; color: #718096; margin-top: 5px;">Click image for full size</p>
+ </div>
+ """
+ except Exception:
+ # Fallback to file link if embedding fails
+ html += f"""
+ <div style="text-align: center;">
+ <h4 style="text-transform: capitalize;">{comparison_name}</h4>
+ <a href="file://{plot_path.absolute()}" style="color: #007bff;">View comparison plot</a>
+ </div>
+ """
+
+ html += """
+ </div>
+ <p style="margin-top: 10px; color: #718096;">These charts compare cumulative successful folio migrations across different filesystem configurations. Each line represents a different filesystem with distinct colors.</p>
+ """
+
+ # Check for fragmentation plots and comparison charts
+ frag_plots = []
+ frag_dir = monitoring_path / "fragmentation"
+ if frag_dir.exists():
+ frag_plots = sorted(frag_dir.glob("*_fragmentation_plot.png"))
+
+ # Add fragmentation plots if available
+ if frag_plots:
+ html += """
+ <h3>Memory Fragmentation Analysis</h3>
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(400px, 1fr)); gap: 20px; margin: 20px 0;">
+ """
+
+ for plot_path in frag_plots:
+ hostname = plot_path.stem.replace("_fragmentation_plot", "").replace(
+ "lpc-build-linux-", ""
+ )
+ # Read image and convert to base64
+ try:
+ with open(plot_path, "rb") as f:
+ image_base64 = base64.b64encode(f.read()).decode()
+ html += f"""
+ <div style="text-align: center;">
+ <h4>{hostname}</h4>
+ <a href="monitoring/fragmentation/{plot_path.name}" target="_blank" title="Click for full-size image">
+ <img src="data:image/png;base64,{image_base64}" style="max-width: 100%; border-radius: 5px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); cursor: pointer;" />
+ </a>
+ <p style="font-size: 0.8em; color: #718096; margin-top: 5px;">Click image for full size</p>
+ </div>
+ """
+ except Exception:
+ # Fallback to file link if embedding fails
+ html += f"""
+ <div style="text-align: center;">
+ <h4>{hostname}</h4>
+ <a href="file://{plot_path.absolute()}" style="color: #007bff;">View fragmentation plot</a>
+ </div>
+ """
+
+ html += """
+ </div>
+ <p style="margin-top: 10px; color: #718096;">These plots show memory fragmentation patterns during kernel builds.</p>
+ """
+
+ # Check for fragmentation comparison charts
+ frag_comparison_plots = []
+ if frag_dir.exists():
+ frag_comparison_plots = sorted(frag_dir.glob("comparison_*.png"))
+
+ # Add fragmentation comparison charts if available
+ if frag_comparison_plots:
+ html += """
+ <h3>Memory Fragmentation Comparisons</h3>
+ <div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(500px, 1fr)); gap: 20px; margin: 20px 0;">
+ """
+
+ for plot_path in frag_comparison_plots:
+ # Extract comparison name from filename (e.g., "xfs_4k_vs_16k" from "comparison_xfs_4k_vs_16k.png")
+ comparison_name = plot_path.stem.replace("comparison_", "").replace(
+ "_", " "
+ )
+ # Read image and convert to base64
+ try:
+ with open(plot_path, "rb") as f:
+ image_base64 = base64.b64encode(f.read()).decode()
+ html += f"""
+ <div style="text-align: center;">
+ <h4 style="text-transform: capitalize;">{comparison_name}</h4>
+ <a href="monitoring/fragmentation/{plot_path.name}" target="_blank" title="Click for full-size image">
+ <img src="data:image/png;base64,{image_base64}" style="max-width: 100%; border-radius: 5px; box-shadow: 0 4px 6px rgba(0,0,0,0.1); cursor: pointer;" />
+ </a>
+ <p style="font-size: 0.8em; color: #718096; margin-top: 5px;">Click image for full size</p>
+ </div>
+ """
+ except Exception:
+ # Fallback to file link if embedding fails
+ html += f"""
+ <div style="text-align: center;">
+ <h4 style="text-transform: capitalize;">{comparison_name}</h4>
+ <a href="file://{plot_path.absolute()}" style="color: #007bff;">View comparison plot</a>
+ </div>
+ """
+
+ html += """
+ </div>
+ <p style="margin-top: 10px; color: #718096;">These charts compare memory fragmentation between different filesystem configurations.</p>
+ """
+
+ # If no plots are available, show a message
+ if (
+ not frag_plots
+ and not folio_plots
+ and not frag_comparison_plots
+ and not folio_comparison_plots
+ ):
+ html += """
+ <div style="text-align: center; padding: 40px; color: #718096; font-style: italic;">
+ No monitoring plots found. Ensure monitoring is enabled during builds.
+ </div>
+ """
+
+ html += "</div>"
+ return html
+
+
+def generate_html_report(results_dir, summaries, timings, monitoring=None):
+ """Generate an HTML report with embedded graphs."""
+
+ # Generate graphs
+ comparison_chart = create_build_time_comparison_chart(summaries)
+ distribution_chart = create_build_time_distribution(timings)
+ timeline_chart = create_build_timeline(timings)
+ success_chart = create_success_rate_chart(summaries)
+
+ # Calculate overall statistics
+ total_builds = sum(s["total_builds"] for s in summaries.values())
+ successful_builds = sum(s["successful_builds"] for s in summaries.values())
+ failed_builds = sum(s["failed_builds"] for s in summaries.values())
+ total_time_hours = sum(s["statistics"]["total_hours"] for s in summaries.values())
+
+ # Get all durations for overall statistics
+ all_durations = []
+ for timing_data in timings.values():
+ all_durations.extend(
+ [entry["duration"] for entry in timing_data if entry["success"]]
+ )
+
+ if all_durations:
+ overall_avg = statistics.mean(all_durations)
+ overall_median = statistics.median(all_durations)
+ overall_stdev = statistics.stdev(all_durations) if len(all_durations) > 1 else 0
+ overall_min = min(all_durations)
+ overall_max = max(all_durations)
+ else:
+ overall_avg = overall_median = overall_stdev = overall_min = overall_max = 0
+
+ html_content = f"""<!DOCTYPE html>
+<html>
+<head>
+ <title>Linux Kernel Build Performance Report</title>
+ <meta charset="utf-8">
+ <style>
+ body {{
+ font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+ margin: 0;
+ padding: 20px;
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+ min-height: 100vh;
+ }}
+ .container {{
+ max-width: 1400px;
+ margin: 0 auto;
+ background: white;
+ border-radius: 10px;
+ padding: 30px;
+ box-shadow: 0 20px 60px rgba(0,0,0,0.3);
+ }}
+ h1 {{
+ color: #2d3748;
+ text-align: center;
+ font-size: 2.5em;
+ margin-bottom: 10px;
+ text-shadow: 2px 2px 4px rgba(0,0,0,0.1);
+ }}
+ .timestamp {{
+ text-align: center;
+ color: #718096;
+ margin-bottom: 30px;
+ font-size: 0.9em;
+ }}
+ .summary-grid {{
+ display: grid;
+ grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+ gap: 20px;
+ margin: 30px 0;
+ }}
+ .stat-card {{
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+ color: white;
+ padding: 20px;
+ border-radius: 10px;
+ text-align: center;
+ box-shadow: 0 4px 6px rgba(0,0,0,0.1);
+ transition: transform 0.2s;
+ }}
+ .stat-card:hover {{
+ transform: translateY(-5px);
+ box-shadow: 0 8px 12px rgba(0,0,0,0.2);
+ }}
+ .stat-value {{
+ font-size: 2em;
+ font-weight: bold;
+ margin: 10px 0;
+ }}
+ .stat-label {{
+ font-size: 0.9em;
+ opacity: 0.9;
+ }}
+ .chart-container {{
+ margin: 40px 0;
+ padding: 20px;
+ background: #f7fafc;
+ border-radius: 10px;
+ box-shadow: inset 0 2px 4px rgba(0,0,0,0.06);
+ }}
+ .chart-title {{
+ font-size: 1.3em;
+ color: #2d3748;
+ margin-bottom: 20px;
+ text-align: center;
+ font-weight: 600;
+ }}
+ .chart {{
+ text-align: center;
+ margin: 20px 0;
+ }}
+ .chart img {{
+ max-width: 100%;
+ border-radius: 5px;
+ box-shadow: 0 4px 6px rgba(0,0,0,0.1);
+ }}
+ table {{
+ width: 100%;
+ border-collapse: collapse;
+ margin: 20px 0;
+ background: white;
+ border-radius: 10px;
+ overflow: hidden;
+ box-shadow: 0 4px 6px rgba(0,0,0,0.1);
+ }}
+ th {{
+ background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
+ color: white;
+ padding: 15px;
+ text-align: left;
+ font-weight: 600;
+ }}
+ td {{
+ padding: 12px 15px;
+ border-bottom: 1px solid #e2e8f0;
+ }}
+ tr:hover {{
+ background: #f7fafc;
+ }}
+ tr:last-child td {{
+ border-bottom: none;
+ }}
+ .success {{
+ color: #48bb78;
+ font-weight: bold;
+ }}
+ .failure {{
+ color: #f56565;
+ font-weight: bold;
+ }}
+ .section {{
+ margin: 40px 0;
+ }}
+ .section-title {{
+ font-size: 1.8em;
+ color: #2d3748;
+ margin-bottom: 20px;
+ padding-bottom: 10px;
+ border-bottom: 3px solid #667eea;
+ }}
+ .no-data {{
+ text-align: center;
+ padding: 40px;
+ color: #718096;
+ font-style: italic;
+ }}
+ </style>
+</head>
+<body>
+ <div class="container">
+ <h1>🚀 Linux Kernel Build Performance Report</h1>
+ <div class="timestamp">Generated on {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}</div>
+
+ <div class="summary-grid">
+ <div class="stat-card">
+ <div class="stat-label">Total Builds</div>
+ <div class="stat-value">{total_builds}</div>
+ </div>
+ <div class="stat-card">
+ <div class="stat-label">Success Rate</div>
+ <div class="stat-value">{(successful_builds/total_builds*100):.1f}%</div>
+ </div>
+ <div class="stat-card">
+ <div class="stat-label">Total Time</div>
+ <div class="stat-value">{total_time_hours:.1f}h</div>
+ </div>
+ <div class="stat-card">
+ <div class="stat-label">Average Build</div>
+ <div class="stat-value">{overall_avg:.1f}s</div>
+ </div>
+ <div class="stat-card">
+ <div class="stat-label">Configurations</div>
+ <div class="stat-value">{len(summaries)}</div>
+ </div>
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">📊 Build Time Comparison</h2>
+ {"<div class='chart-container'><div class='chart'><img src='data:image/png;base64," + comparison_chart + "' /></div></div>" if comparison_chart else "<div class='no-data'>Graph generation requires matplotlib</div>"}
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">📈 Build Time Distribution</h2>
+ {"<div class='chart-container'><div class='chart'><img src='data:image/png;base64," + distribution_chart + "' /></div></div>" if distribution_chart else "<div class='no-data'>Graph generation requires matplotlib</div>"}
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">📉 Build Time Timeline</h2>
+ {"<div class='chart-container'><div class='chart'><img src='data:image/png;base64," + timeline_chart + "' /></div></div>" if timeline_chart else "<div class='no-data'>Graph generation requires matplotlib</div>"}
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">✅ Success Rates</h2>
+ {"<div class='chart-container'><div class='chart'><img src='data:image/png;base64," + success_chart + "' /></div></div>" if success_chart else "<div class='no-data'>Graph generation requires matplotlib</div>"}
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">📋 Detailed Results by Host</h2>
+ <table>
+ <thead>
+ <tr>
+ <th>Host/Configuration</th>
+ <th>Filesystem</th>
+ <th>Total Builds</th>
+ <th>Successful</th>
+ <th>Failed</th>
+ <th>Success Rate</th>
+ <th>Average (s)</th>
+ <th>Median (s)</th>
+ <th>Min (s)</th>
+ <th>Max (s)</th>
+ <th>Std Dev (s)</th>
+ </tr>
+ </thead>
+ <tbody>
+"""
+
+ for hostname in sorted(summaries.keys()):
+ data = summaries[hostname]
+ stats = data["statistics"]
+ success_rate = (
+ (data["successful_builds"] / data["total_builds"] * 100)
+ if data["total_builds"] > 0
+ else 0
+ )
+
+ # Extract filesystem type from hostname
+ if "xfs" in hostname:
+ if "64k" in hostname:
+ fs_type = "XFS (64k blocks)"
+ elif "32k" in hostname:
+ fs_type = "XFS (32k blocks)"
+ elif "16k" in hostname:
+ fs_type = "XFS (16k blocks)"
+ elif "8k" in hostname:
+ fs_type = "XFS (8k blocks)"
+ else:
+ fs_type = "XFS (4k blocks)"
+ elif "ext4" in hostname:
+ fs_type = "EXT4"
+ elif "btrfs" in hostname:
+ fs_type = "Btrfs"
+ else:
+ fs_type = "Unknown"
+
+ # Handle completely failed hosts
+ if data["successful_builds"] == 0 and "failure_reason" in data:
+ html_content += f"""
+ <tr style="background-color: #ffe6e6;">
+ <td><strong>{hostname.replace('lpc-build-linux-', '')}</strong></td>
+ <td>{fs_type}</td>
+ <td>{data['total_builds']}</td>
+ <td class="success">0</td>
+ <td class="failure">{data['failed_builds']}</td>
+ <td>0.0%</td>
+ <td colspan="5" style="text-align: center; color: #d9534f;">
+ <strong>All builds failed:</strong> {data['failure_reason']}
+ </td>
+ </tr>
+"""
+ else:
+ html_content += f"""
+ <tr>
+ <td><strong>{hostname.replace('lpc-build-linux-', '')}</strong></td>
+ <td>{fs_type}</td>
+ <td>{data['total_builds']}</td>
+ <td class="success">{data['successful_builds']}</td>
+ <td class="{'failure' if data['failed_builds'] > 0 else ''}">{data['failed_builds']}</td>
+ <td>{success_rate:.1f}%</td>
+ <td>{stats['average']:.2f}</td>
+ <td>{stats['median']:.2f}</td>
+ <td>{stats['min']:.2f}</td>
+ <td>{stats['max']:.2f}</td>
+ <td>{stats.get('stddev', 0):.2f}</td>
+ </tr>
+"""
+
+ html_content += f"""
+ </tbody>
+ </table>
+ </div>
+
+ <div class="section">
+ <h2 class="section-title">📊 Overall Statistics</h2>
+ <table>
+ <tr>
+ <th>Metric</th>
+ <th>Value</th>
+ </tr>
+ <tr>
+ <td>Overall Average Build Time</td>
+ <td>{overall_avg:.2f} seconds</td>
+ </tr>
+ <tr>
+ <td>Overall Median Build Time</td>
+ <td>{overall_median:.2f} seconds</td>
+ </tr>
+ <tr>
+ <td>Overall Standard Deviation</td>
+ <td>{overall_stdev:.2f} seconds</td>
+ </tr>
+ <tr>
+ <td>Fastest Build</td>
+ <td>{overall_min:.2f} seconds</td>
+ </tr>
+ <tr>
+ <td>Slowest Build</td>
+ <td>{overall_max:.2f} seconds</td>
+ </tr>
+ <tr>
+ <td>Total CPU Time</td>
+ <td>{total_time_hours:.2f} hours</td>
+ </tr>
+ </table>
+ </div>
+
+ {generate_monitoring_section(results_dir, monitoring) if monitoring else ""}
+
+ </div>
+</body>
+</html>"""
+
+ # Save HTML report
+ report_path = Path(results_dir) / "build_performance_report.html"
+ with open(report_path, "w") as f:
+ f.write(html_content)
+
+ return report_path
+
+
+def consolidate_html_output(results_dir):
+ """Create HTML directory with embedded report and PNG files for full-size viewing."""
+ import shutil
+
+ html_dir = Path(results_dir) / "html"
+ html_dir.mkdir(exist_ok=True)
+
+ # Copy the main HTML report (which has everything embedded as base64)
+ src_html = Path(results_dir) / "build_performance_report.html"
+ if src_html.exists():
+ shutil.copy2(src_html, html_dir / "index.html")
+
+ # Copy monitoring PNG files if they exist
+ monitoring_path = Path(results_dir) / "monitoring"
+ if monitoring_path.exists():
+ # Create monitoring subdirectory in HTML folder
+ html_monitoring = html_dir / "monitoring"
+ html_monitoring.mkdir(exist_ok=True)
+
+ # Copy folio migration plots
+ for plot in monitoring_path.glob("*_folio_migration_plot.png"):
+ shutil.copy2(plot, html_monitoring / plot.name)
+
+ # Copy folio comparison plots
+ for plot in monitoring_path.glob("folio_comparison_*.png"):
+ shutil.copy2(plot, html_monitoring / plot.name)
+
+ # Copy fragmentation plots
+ frag_dir = monitoring_path / "fragmentation"
+ if frag_dir.exists():
+ html_frag = html_monitoring / "fragmentation"
+ html_frag.mkdir(exist_ok=True)
+
+ for plot in frag_dir.glob("*_fragmentation_plot.png"):
+ shutil.copy2(plot, html_frag / plot.name)
+
+ # Copy all comparison plots (including new A/B comparisons)
+ for plot in frag_dir.glob("comparison_*.png"):
+ shutil.copy2(plot, html_frag / plot.name)
+
+ # Copy migration analysis plots
+ for plot in frag_dir.glob("migration_analysis_*.png"):
+ shutil.copy2(plot, html_frag / plot.name)
+
+ # Calculate total size
+ total_size = sum(f.stat().st_size for f in html_dir.rglob("*") if f.is_file())
+ size_mb = total_size / (1024 * 1024)
+
+ # Count files
+ png_count = len(list(html_dir.rglob("*.png")))
+
+ print(f"\n📦 Consolidated HTML output: {html_dir}")
+ print(f" Main report: index.html (self-contained with embedded images)")
+ print(f" PNG files: {png_count} (for full-size viewing)")
+ print(f" Total size: {size_mb:.1f} MB")
+ print(f"\n To share results, copy entire directory: {html_dir}/")
+
+ return html_dir
+
+
+def main():
+ parser = argparse.ArgumentParser(
+ description="Visualize build results with graphs and HTML reports"
+ )
+ parser.add_argument("results_dir", help="Directory containing result files")
+ parser.add_argument("--no-html", action="store_true", help="Skip HTML generation")
+ args = parser.parse_args()
+
+ results_dir = Path(args.results_dir)
+ if not results_dir.exists():
+ print(f"Error: Results directory '{results_dir}' does not exist")
+ return 1
+
+ # Load data
+ print("Loading results...")
+ summaries, timings, monitoring = load_all_results(results_dir)
+
+ if not summaries:
+ print("No summary files found in the results directory")
+ return 1
+
+ print(f"Found {len(summaries)} host configurations")
+ print(f"Found {len(timings)} timing datasets")
+
+ if not args.no_html:
+ print("Generating HTML report...")
+ report_path = generate_html_report(results_dir, summaries, timings, monitoring)
+ print(f"✅ HTML report generated: {report_path}")
+ print(f" Open in browser: file://{report_path.absolute()}")
+
+ # Consolidate all HTML output files
+ html_dir = consolidate_html_output(results_dir)
+
+ # Print summary to console
+ print("\n" + "=" * 60)
+ print("Build Performance Summary")
+ print("=" * 60)
+
+ for hostname in sorted(summaries.keys()):
+ data = summaries[hostname]
+ print(f"\n{hostname}:")
+ print(f" Successful: {data['successful_builds']}/{data['total_builds']}")
+ print(f" Average: {data['statistics']['average']:.2f}s")
+ print(
+ f" Min/Max: {data['statistics']['min']:.2f}s / {data['statistics']['max']:.2f}s"
+ )
+
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())
diff --git a/workflows/linux/Kconfig b/workflows/linux/Kconfig
index 1b057042..428a3022 100644
--- a/workflows/linux/Kconfig
+++ b/workflows/linux/Kconfig
@@ -260,6 +260,7 @@ config BOOTLINUX_TREE_NAME
config BOOTLINUX_TREE
string
+ output yaml
default BOOTLINUX_TREE_LINUS_URL if BOOTLINUX_TREE_LINUS
default BOOTLINUX_TREE_STABLE_URL if BOOTLINUX_TREE_STABLE
default BOOTLINUX_TREE_STABLE_RC_URL if BOOTLINUX_TREE_STABLE_RC
--
2.51.0
* Re: [PATCH] build-linux: add workflow for repeated kernel builds
2025-09-19 3:51 [PATCH] build-linux: add workflow for repeated kernel builds Luis Chamberlain
@ 2025-09-19 8:29 ` Daniel Gomez
2025-09-19 18:06 ` Luis Chamberlain
0 siblings, 1 reply; 3+ messages in thread
From: Daniel Gomez @ 2025-09-19 8:29 UTC (permalink / raw)
To: Luis Chamberlain, Chuck Lever, Daniel Gomez, kdevops; +Cc: David Bueso
On 19/09/2025 05.51, Luis Chamberlain wrote:
> Add a new workflow that allows building the Linux kernel multiple times
> to measure build time variations and performance. This is useful for
> benchmarking build systems and compiler performance testing.
>
> This goes in with monitoring support so we can do AB testing against
> different filesystems.
>
> Generated-by: Claude AI
> Suggested-by: David Bueso <dave@stgolabs.net>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>
> Demo of visualization of results:
>
> https://htmlpreview.github.io/?https://github.com/mcgrof/plot-build/blob/main/index.html
This is really interesting and cool work.
However, I'm generally a strong advocate for reusing projects as much as
possible. So this feels like a missed opportunity not to build on existing
projects designed for this purpose (e.g. hyperfine [1]).
Link: https://github.com/sharkdp/hyperfine [1]
It would be helpful to understand the reasoning behind choosing one
approach (adding custom Ansible tasks/playbooks) over another (reusing
something like hyperfine).
For reference, I put together a quick and hacky example for benchmarking kernel
builds using hyperfine and a simple Makefile, available here:
https://github.com/dkruces/linux-benchmarks/
Report example:
https://github.com/dkruces/linux-benchmarks/blob/automation-make/results/mac1611/REPORT.md
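Since kdevops' reporting side is already Python, one low-friction way to reuse
hyperfine would be to drive it via subprocess and consume its JSON export. A
rough sketch, assuming hyperfine is installed on the build host (the helper
names and the build command are illustrative; --runs, --prepare and
--export-json are real hyperfine flags):

```python
import json
import subprocess


def hyperfine_cmd(build_cmd, runs=10, export_path="results.json"):
    """Build the argv for a hyperfine benchmark run.

    --prepare cleans the tree before each timed run so every build
    starts from the same state; --export-json yields machine-readable
    timings the reporting scripts could consume.
    """
    return [
        "hyperfine",
        "--runs", str(runs),
        "--prepare", "make -s clean",
        "--export-json", export_path,
        build_cmd,
    ]


def run_benchmark(build_cmd, runs=10, export_path="results.json"):
    # Requires hyperfine on the build host; not exercised here.
    subprocess.run(hyperfine_cmd(build_cmd, runs, export_path), check=True)
    with open(export_path) as f:
        # Each entry carries mean, stddev, min, max and per-run times.
        return json.load(f)["results"][0]
```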
To clarify, I'm not opposed in any way to the work here. It's really great.
Please merge & push! :)
Some things I'd like to see as part of the report:
* The kernel version
* The kernel configuration. Hopefully, the fragment work gives us better
control over this, and it will improve further once we have SAT support in
the kernel
* Toolchain: GCC, LLVM/Clang
* Host info: baremetal/vm, number of cores, memory available, etc.
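A best-effort sketch of collecting that metadata from the build host (the
function and field names are illustrative, not part of the patch; tools that
are missing simply yield None rather than failing the run):

```python
import os
import platform
import subprocess


def _run(cmd, cwd=None):
    """Run a command, returning its stripped stdout or None on failure."""
    try:
        return subprocess.check_output(
            cmd, cwd=cwd, text=True, stderr=subprocess.DEVNULL
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        return None


def _first_line(text):
    return text.splitlines()[0] if text else None


def _meminfo_total_kb():
    """MemTotal from /proc/meminfo (Linux only); None elsewhere."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return None


def collect_build_metadata(kernel_dir="."):
    """Gather kernel version, toolchain and host info for the report."""
    return {
        # Kernel version as reported by kbuild itself
        "kernel_version": _run(["make", "-s", "kernelversion"], cwd=kernel_dir),
        # Toolchain: whichever compilers are first in PATH
        "gcc": _first_line(_run(["gcc", "--version"])),
        "clang": _first_line(_run(["clang", "--version"])),
        # Host info
        "machine": platform.machine(),
        "cpus": os.cpu_count(),
        "host_kernel": platform.release(),
        "memory_kb": _meminfo_total_kb(),
    }
```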
I've also noticed some inconsistencies between runs, which could be due to
memory behavior, folio migration, fragmentation, and similar factors. To ensure
full confidence when comparing build times, we should rely on reproducible
builds [2].
Link: https://docs.kernel.org/kbuild/reproducible-builds.html#reproducible-builds [2]
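The variables that document calls out can be pinned from the workflow side; a
minimal sketch (the KBUILD_* names come from the kbuild reproducible-builds
documentation, the values are illustrative):

```python
import os


def reproducible_env(base_env=None):
    """Pin the identifiers that kbuild embeds into the binary, so
    repeated builds of the same tree produce identical objects."""
    env = dict(base_env if base_env is not None else os.environ)
    env.update({
        "KBUILD_BUILD_TIMESTAMP": "Thu Jan  1 00:00:00 UTC 1970",
        "KBUILD_BUILD_USER": "kdevops",
        "KBUILD_BUILD_HOST": "kdevops",
    })
    return env


# Illustrative invocation (not run here):
# subprocess.run(["make", "-j8"], cwd="/path/to/linux",
#                env=reproducible_env(), check=True)
```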
FYI, I have reproducible build support patches ready. I'll post them today.
* Re: [PATCH] build-linux: add workflow for repeated kernel builds
2025-09-19 8:29 ` Daniel Gomez
@ 2025-09-19 18:06 ` Luis Chamberlain
0 siblings, 0 replies; 3+ messages in thread
From: Luis Chamberlain @ 2025-09-19 18:06 UTC (permalink / raw)
To: Daniel Gomez; +Cc: Chuck Lever, Daniel Gomez, kdevops, David Bueso
On Fri, Sep 19, 2025 at 10:29:45AM +0200, Daniel Gomez wrote:
> On 19/09/2025 05.51, Luis Chamberlain wrote:
> > Add a new workflow that allows building the Linux kernel multiple times
> > to measure build time variations and performance. This is useful for
> > benchmarking build systems and compiler performance testing.
> >
> > This goes in with monitoring support so we can do AB testing against
> > different filesystems.
> >
> > Generated-by: Claude AI
> > Suggested-by: David Bueso <dave@stgolabs.net>
> > Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> > ---
> >
> > Demo of visualization of results:
> >
> > https://htmlpreview.github.io/?https://github.com/mcgrof/plot-build/blob/main/index.html
>
> This is really interesting and cool work.
>
> However, I'm generally a strong advocate for reusing projects as much as
> possible. So this feels like a missed opportunity not to build on existing
> projects designed for this purpose (e.g. hyperfine [1]).
>
> Link: https://github.com/sharkdp/hyperfine [1]
>
> It would be helpful to understand the reasoning behind choosing one
> approach (adding custom Ansible tasks/playbooks) over another (reusing
> something like hyperfine).
>
> For reference, I put together a quick and hacky example for benchmarking kernel
> builds using hyperfine and a simple Makefile, available here:
>
> https://github.com/dkruces/linux-benchmarks/
>
> Report example:
> https://github.com/dkruces/linux-benchmarks/blob/automation-make/results/mac1611/REPORT.md
Awesome!
> To clarify, I'm not opposed in any way to the work here. It's really great.
> Please merge & push! :)
I agree that re-use is crucial. The only complexity in re-use is licensing,
but MIT is compatible with copyleft-next, so we should be able to re-use it.
> Some things I'd like to see as part of the report:
> * The kernel version
> * The kernel configuration. Hopefully, the fragment work gives us better
> control over this, and it will improve further once we have SAT support in
> the kernel
> * Toolchain: GCC, LLVM/Clang
> * Host info: baremetal/vm, number of cores, memory available, etc.
>
> I've also noticed some inconsistencies between runs, which could be due to
> memory behavior, folio migration, fragmentation, and similar factors. To ensure
> full confidence when comparing build times, we should rely on reproducible
> builds [2].
>
> Link: https://docs.kernel.org/kbuild/reproducible-builds.html#reproducible-builds [2]
>
> FYI, I have reproducible build support patches ready. I'll post them today.
Sweet!
I'll merge this, and then I think we can add a choice option to let us
pick the method. We can kill the sillier method later.
For now I just need data for evaluation of memory fragmentation runs.
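A rough sketch of what such a choice could look like in Kconfig (the symbol
names are made up, not from the patch):

```kconfig
choice
	prompt "Kernel build benchmarking method"
	default BUILD_LINUX_METHOD_CUSTOM

config BUILD_LINUX_METHOD_CUSTOM
	bool "Custom Ansible tasks (current implementation)"

config BUILD_LINUX_METHOD_HYPERFINE
	bool "hyperfine-driven builds"

endchoice
```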
Luis