public inbox for kdevops@lists.linux.dev
* [PATCH] workflows: add sysbench MySQL TPS variability support
@ 2024-09-14  0:53 Luis Chamberlain
From: Luis Chamberlain @ 2024-09-14  0:53 UTC (permalink / raw)
  To: kdevops, da.gomez, john.g.garry, djwong; +Cc: mcgrof

Drives which support large power-fail safe writes above 4 KiB can
benefit from disabling database features such as the InnoDB doublewrite
buffer; doing so can increase TPS and reduce TPS variability by large
factors. Add support to help developers test and experiment with this.
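To make "TPS variability" concrete, one common way to quantify it is the
coefficient of variation (standard deviation over mean) of the per-interval
TPS samples; lower means steadier throughput. A minimal sketch, with made-up
sample values purely for illustration:

```python
# Hypothetical illustration: quantify TPS variability via the
# coefficient of variation (stddev / mean); lower is steadier.
from statistics import mean, stdev

def tps_cv(tps_samples):
    """Coefficient of variation for a series of per-interval TPS samples."""
    return stdev(tps_samples) / mean(tps_samples)

# Example values are invented, not measured: a spiky run versus a
# steadier one such as you might see with doublewrite disabled on a
# drive with large power-fail safe writes.
doublewrite_on = [900, 400, 1100, 300, 1000, 450]
doublewrite_off = [980, 1010, 990, 1005, 995, 1000]

print(f"on:  CV={tps_cv(doublewrite_on):.3f}")
print(f"off: CV={tps_cv(doublewrite_off):.3f}")
```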

This adds initial support using Docker and MySQL, but it can easily be
extended to support PostgreSQL, and it can also leverage host-side
solutions without Docker. This started out as a set of scripts to help
speed up testing [0]; this patch converts them into proper automation
support in kdevops.

Telemetry is supported by leveraging a MySQL Shell plugin and some
ad hoc scripts which parse sysbench TPS JSON output.
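The TPS extraction those scripts perform boils down to the following
sketch, assuming the JSON output consumed here has the shape
{"statistics": [{"interval": N, "tps": X}, ...]} (the exact schema
depends on the ad hoc JSON emitter used, so treat this as illustrative):

```python
import json

# Minimal sketch of the (interval, tps) extraction done by the
# sysbench-tps-*.py scripts; the assumed schema is
# {"statistics": [{"interval": N, "tps": X}, ...]}.
def extract_tps(path):
    """Return (interval, tps) tuples from a sysbench JSON output file."""
    with open(path) as f:
        data = json.load(f)
    return [(s["interval"], s["tps"]) for s in data.get("statistics", [])]
```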

Support is added for LBS on XFS and also for ext4 with bigalloc. The
methodology adopted for filesystem support should make it *extremely*
easy to extend this to other filesystems and other filesystem
configurations: mostly Kconfig edits. This helps showcase the value of
the link between Kconfig variables and the Ansible variables we now
generate automatically. This should simplify adding new workloads
considerably.
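The Kconfig-to-Ansible link amounts to a mechanical mapping: symbols
marked for YAML output become lowercase Ansible variables. A hypothetical
sketch of that mapping, not kdevops' actual generator:

```python
# Hypothetical sketch: map Kconfig output lines to Ansible variables,
# e.g. CONFIG_SYSBENCH_DOCKER=y -> sysbench_docker: True. The real
# kdevops "output yaml" machinery is more involved than this.
def kconfig_to_ansible(lines):
    out = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments and blank lines
        key, _, val = line.partition("=")
        key = key.removeprefix("CONFIG_").lower()
        out[key] = True if val == "y" else val.strip('"')
    return out
```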

[0] https://github.com/mcgrof/plot-sysbench

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---

I already folded into kdevops support for the vfs tree and the
vfs.blocksize branch. This leverages that, and is also an example of
what a new workflow looks like with the new selective YAML output. I've
only just gotten all make targets to work, so the next step will be to
do the actual introspection and compare it to what I had with
plot-sysbench. I also migrated the TPS parsing to leverage the sysbench
JSON output; I haven't yet fully examined how correct those scripts are.

Finally, I've been trying to see how hard it would be for ChatGPT to do
some work for us on kdevops [1], and although I've had mixed results I
figured I'd share it; it's a great primer for structure. It was in no
way, shape, or form close to what was needed in the end, but it was very
useful for oddball Ansible quirks. It also did all the conversions from
plot-sysbench to support sysbench JSON output; I haven't tested those
yet, but they are pretty straightforward, and plotting TPS is not yet
hooked up. Once I test them I'll hook up the TPS plot as the last part
of the automation.

I tested with VMs, but I had also tested plot-sysbench on the cloud with
AWS. Although initial cloud support is there, some more work would be
required to ensure the right drive is used and that symlinks are used
for many of the directories, as some cloud nodes come up with a pretty
small primary partition.

[1] https://chatgpt.com/share/abab6b23-829c-4da0-889c-4e1ee96c084c

 .gitignore                                    |   2 +
 .../sysbench-mysql-atomic-tps-variability     |  29 +
 kconfigs/workflows/Kconfig                    |  26 +
 .../sysbench/sysbench-tps-compare.py          | 115 ++++
 .../workflows/sysbench/sysbench-tps-plot.py   |  92 +++
 .../sysbench/sysbench-tps-variance.py         | 108 ++++
 playbooks/roles/gen_hosts/tasks/main.yml      |  55 ++
 .../roles/gen_hosts/templates/sysbench.j2     |  23 +
 playbooks/roles/gen_nodes/defaults/main.yml   |   2 +
 playbooks/roles/gen_nodes/tasks/main.yml      |  86 +++
 playbooks/roles/sysbench/defaults/main.yml    |  52 ++
 .../tasks/install-deps/debian/main.yml        |  31 +
 .../sysbench/tasks/install-deps/main.yml      |  12 +
 .../tasks/install-deps/redhat/main.yml        |  43 ++
 .../sysbench/tasks/install-deps/suse/main.yml |  67 +++
 playbooks/roles/sysbench/tasks/main.yaml      | 530 ++++++++++++++++++
 .../roles/sysbench/templates/mysql.conf.j2    |  94 ++++
 .../0001-setup-sysbench-permissions.sh        |   3 +
 .../0002-install-mysql-shell-plugin-reqs.sh   |   3 +
 .../post-entrypoint-custom-bringup.sh         |  11 +
 playbooks/sysbench.yml                        |   4 +
 workflows/Makefile                            |   4 +
 workflows/sysbench/Kconfig                    | 127 +++++
 workflows/sysbench/Kconfig.docker             | 130 +++++
 workflows/sysbench/Kconfig.ext4               | 124 ++++
 workflows/sysbench/Kconfig.fs                 | 173 ++++++
 workflows/sysbench/Kconfig.xfs                | 162 ++++++
 workflows/sysbench/Makefile                   |  66 +++
 28 files changed, 2174 insertions(+)
 create mode 100644 defconfigs/sysbench-mysql-atomic-tps-variability
 create mode 100755 playbooks/python/workflows/sysbench/sysbench-tps-compare.py
 create mode 100755 playbooks/python/workflows/sysbench/sysbench-tps-plot.py
 create mode 100755 playbooks/python/workflows/sysbench/sysbench-tps-variance.py
 create mode 100644 playbooks/roles/gen_hosts/templates/sysbench.j2
 create mode 100644 playbooks/roles/sysbench/defaults/main.yml
 create mode 100644 playbooks/roles/sysbench/tasks/install-deps/debian/main.yml
 create mode 100644 playbooks/roles/sysbench/tasks/install-deps/main.yml
 create mode 100644 playbooks/roles/sysbench/tasks/install-deps/redhat/main.yml
 create mode 100644 playbooks/roles/sysbench/tasks/install-deps/suse/main.yml
 create mode 100644 playbooks/roles/sysbench/tasks/main.yaml
 create mode 100644 playbooks/roles/sysbench/templates/mysql.conf.j2
 create mode 100755 playbooks/roles/sysbench/templates/root-user/0001-setup-sysbench-permissions.sh
 create mode 100755 playbooks/roles/sysbench/templates/root-user/0002-install-mysql-shell-plugin-reqs.sh
 create mode 100755 playbooks/roles/sysbench/templates/root-user/post-entrypoint-custom-bringup.sh
 create mode 100644 playbooks/sysbench.yml
 create mode 100644 workflows/sysbench/Kconfig
 create mode 100644 workflows/sysbench/Kconfig.docker
 create mode 100644 workflows/sysbench/Kconfig.ext4
 create mode 100644 workflows/sysbench/Kconfig.fs
 create mode 100644 workflows/sysbench/Kconfig.xfs
 create mode 100644 workflows/sysbench/Makefile

diff --git a/.gitignore b/.gitignore
index ba88777b7829..9138399a8fbc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -65,6 +65,8 @@ workflows/gitr/results/
 workflows/ltp/results/
 workflows/nfstest/results/
 
+workflows/sysbench/results/
+
 playbooks/roles/linux-mirror/linux-mirror-systemd/*.service
 playbooks/roles/linux-mirror/linux-mirror-systemd/*.timer
 playbooks/roles/linux-mirror/linux-mirror-systemd/mirrors.yaml
diff --git a/defconfigs/sysbench-mysql-atomic-tps-variability b/defconfigs/sysbench-mysql-atomic-tps-variability
new file mode 100644
index 000000000000..26ccca43e26f
--- /dev/null
+++ b/defconfigs/sysbench-mysql-atomic-tps-variability
@@ -0,0 +1,29 @@
+CONFIG_GUESTFS=y
+CONFIG_LIBVIRT=y
+
+CONFIG_WORKFLOWS=y
+CONFIG_WORKFLOW_LINUX_CUSTOM=y
+
+CONFIG_BOOTLINUX=y
+CONFIG_BOOTLINUX_9P=y
+CONFIG_BOOTLINUX_LINUS=y
+CONFIG_BOOTLINUX_TREE_VFS=y
+CONFIG_BOOTLINUX_TREE_VFS_REF_LBS=y
+CONFIG_BOOTLINUX_TREE_VFS_REF="vfs.blocksize"
+
+CONFIG_WORKFLOWS_TESTS=y
+CONFIG_WORKFLOWS_LINUX_TESTS=y
+CONFIG_WORKFLOWS_DEDICATED_WORKFLOW=y
+CONFIG_KDEVOPS_WORKFLOW_DEDICATE_SYSBENCH=y
+CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH=y
+
+CONFIG_SYSBENCH_DOCKER=y
+CONFIG_SYSBENCH_TYPE_MYSQL_DOCKER=y
+CONFIG_SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0=y
+
+CONFIG_SYSBENCH_TEST_ATOMICS=y
+CONFIG_SYSBENCH_TEST_ATOMICS_TPS_VARIABILITY=y
+CONFIG_SYSBENCH_TEST_ATOMICS_XFS_16K_4KS_LBS=y
+CONFIG_SYSBENCH_TEST_ATOMICS_EXT4_4K_4KS_BIGALLOC_16K=y
+
+CONFIG_DEVCONFIG_ENABLE_SYSTEMD_JOURNAL_REMOTE=y
diff --git a/kconfigs/workflows/Kconfig b/kconfigs/workflows/Kconfig
index 6530ce3cf50e..c81e5bb524c4 100644
--- a/kconfigs/workflows/Kconfig
+++ b/kconfigs/workflows/Kconfig
@@ -175,6 +175,13 @@ config KDEVOPS_WORKFLOW_DEDICATE_NFSTEST
 	  This will dedicate your configuration to running only the
 	  nfstest workflow in separate target nodes per testing group.
 
+config KDEVOPS_WORKFLOW_DEDICATE_SYSBENCH
+	bool "sysbench"
+	select KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
+	help
+	  This will dedicate your configuration to running only the
+	  sysbench workflow.
+
 endchoice
 
 endif
@@ -256,6 +263,14 @@ config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_NFSTEST
 	  Select this option if you want to provision nfstest on a
 	  single target node for by-hand testing.
 
+config KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SYSBENCH
+	bool "sysbench"
+	select KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
+	depends on LIBVIRT || TERRAFORM_PRIVATE_NET
+	help
+	  Select this option if you want to provision sysbench on a
+	  single target node for by-hand testing.
+
 endif # !WORKFLOWS_DEDICATED_WORKFLOW
 
 config KDEVOPS_WORKFLOW_ENABLE_FSTESTS
@@ -347,6 +362,17 @@ source "workflows/nfstest/Kconfig"
 endmenu
 endif # KDEVOPS_WORKFLOW_ENABLE_NFSTEST
 
+config KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
+	bool
+	output yaml
+	default y if KDEVOPS_WORKFLOW_NOT_DEDICATED_ENABLE_SYSBENCH || KDEVOPS_WORKFLOW_DEDICATE_SYSBENCH
+
+if KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
+menu "Configure and run the sysbench tests"
+source "workflows/sysbench/Kconfig"
+endmenu
+endif # KDEVOPS_WORKFLOW_ENABLE_SYSBENCH
+
 config KDEVOPS_WORKFLOW_GIT_CLONES_KDEVOPS_GIT
 	bool
 	default y if KDEVOPS_WORKFLOW_ENABLE_FSTESTS || KDEVOPS_WORKFLOW_ENABLE_BLKTESTS
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-compare.py b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
new file mode 100755
index 000000000000..6667e3337c6d
--- /dev/null
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-compare.py
@@ -0,0 +1,115 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+# Accepts sysbench json output, lets you compare two separate runs
+
+import pandas as pd
+import matplotlib.pyplot as plt
+import re
+import json
+import argparse
+from concurrent.futures import ThreadPoolExecutor
+
+# Function to parse a line and extract time and TPS from text output
+def parse_line(line):
+    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    if match:
+        time_in_seconds = int(match.group(1))
+        tps = float(match.group(2))
+        return time_in_seconds, tps
+    return None
+
+# Function to read and parse sysbench text output file
+def read_sysbench_output(file_path):
+    with open(file_path, 'r') as file:
+        lines = file.readlines()
+
+    with ThreadPoolExecutor() as executor:
+        results = list(executor.map(parse_line, lines))
+
+    return [result for result in results if result is not None]
+
+# Function to read and parse sysbench JSON output file
+def read_sysbench_json_output(file_path):
+    with open(file_path, 'r') as file:
+        data = json.load(file)
+
+    tps_data = [(stat['interval'], stat['tps']) for stat in data.get('statistics', [])]
+    return tps_data
+
+# Function to list available matplotlib themes
+def list_themes():
+    print("Available matplotlib themes:")
+    for style in plt.style.available:
+        print(style)
+
+# Main function
+def main():
+    parser = argparse.ArgumentParser(description='Compare sysbench outputs (text and JSON formats).')
+    parser.add_argument('file1', type=str, help='First sysbench output file')
+    parser.add_argument('file2', type=str, help='Second sysbench output file')
+    parser.add_argument('--json1', action='store_true', help='Specify if the first file is in JSON format')
+    parser.add_argument('--json2', action='store_true', help='Specify if the second file is in JSON format')
+    parser.add_argument('--legend1', type=str, default='innodb_doublewrite=ON', help='Legend for the first file')
+    parser.add_argument('--legend2', type=str, default='innodb_doublewrite=OFF', help='Legend for the second file')
+    parser.add_argument('--theme', type=str, default='dark_background', help='Matplotlib theme to use')
+    parser.add_argument('--list-themes', action='store_true', help='List available matplotlib themes')
+    parser.add_argument('--report-interval', type=int, default=1, help='Time interval in seconds for reporting')
+
+    args = parser.parse_args()
+
+    if args.list_themes:
+        list_themes()
+        return
+
+    plt.style.use(args.theme)
+
+    # Read and parse both sysbench output files (either JSON or text)
+    if args.json1:
+        tps_data_1 = read_sysbench_json_output(args.file1)
+    else:
+        tps_data_1 = read_sysbench_output(args.file1)
+
+    if args.json2:
+        tps_data_2 = read_sysbench_json_output(args.file2)
+    else:
+        tps_data_2 = read_sysbench_output(args.file2)
+
+    # Adjust time intervals based on the report interval
+    tps_data_1 = [(time * args.report_interval, tps) for time, tps in tps_data_1]
+    tps_data_2 = [(time * args.report_interval, tps) for time, tps in tps_data_2]
+
+    # Determine the maximum time value to decide if we need to use hours or seconds
+    max_time_in_seconds = max(max(tps_data_1, key=lambda x: x[0])[0], max(tps_data_2, key=lambda x: x[0])[0])
+    use_hours = max_time_in_seconds > 2 * 3600
+
+    # Convert times if necessary
+    if use_hours:
+        tps_data_1 = [(time / 3600, tps) for time, tps in tps_data_1]
+        tps_data_2 = [(time / 3600, tps) for time, tps in tps_data_2]
+        time_label = 'Time (hours)'
+    else:
+        time_label = 'Time (seconds)'
+
+    # Create pandas DataFrames
+    df1 = pd.DataFrame(tps_data_1, columns=[time_label, 'TPS'])
+    df2 = pd.DataFrame(tps_data_2, columns=[time_label, 'TPS'])
+
+    # Plot the TPS values
+    plt.figure(figsize=(30, 12))
+
+    plt.plot(df1[time_label], df1['TPS'], 'ro', markersize=2, label=args.legend1)
+    plt.plot(df2[time_label], df2['TPS'], 'go', markersize=2, label=args.legend2)
+
+    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.xlabel(time_label)
+    plt.ylabel('TPS')
+    plt.grid(True)
+    plt.ylim(0)
+    plt.legend()
+    plt.tight_layout()
+    plt.savefig('a_vs_b.png')
+    #plt.show()
+
+if __name__ == '__main__':
+    main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-plot.py b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
new file mode 100755
index 000000000000..754e85d484a1
--- /dev/null
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-plot.py
@@ -0,0 +1,92 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+# Accepts sysbench json output and provides a plot
+
+import pandas as pd
+import matplotlib.pyplot as plt
+import re
+import json
+import argparse
+from concurrent.futures import ThreadPoolExecutor
+
+# Function to parse a line and extract time and TPS from text
+def parse_line(line):
+    match = re.search(r'\[\s*(\d+)s\s*\].*?tps:\s*([\d.]+)', line)
+    if match:
+        time_in_seconds = int(match.group(1))
+        tps = float(match.group(2))
+        return time_in_seconds, tps
+    return None
+
+# Function to parse sysbench JSON output
+def parse_sysbench_json(data):
+    tps_data = [(stat['interval'], stat['tps']) for stat in data['statistics']]
+    return tps_data
+
+# Main function to handle input and generate TPS plot
+def main():
+    # Setup argument parser
+    parser = argparse.ArgumentParser(description="Generate TPS plot from sysbench output")
+    parser.add_argument('input_file', type=str, help="Path to the input file (text or JSON)")
+    parser.add_argument('--json', action='store_true', help="Specify if the input file is JSON")
+    parser.add_argument('--output', type=str, default='tps_over_time.png', help="Output image file (default: tps_over_time.png)")
+
+    # Parse arguments
+    args = parser.parse_args()
+
+    # Read the input file
+    try:
+        with open(args.input_file, 'r') as file:
+            if args.json:
+                # Parse sysbench JSON output
+                data = json.load(file)
+                tps_data = parse_sysbench_json(data)
+            else:
+                # Read text lines and parse them concurrently
+                lines = file.readlines()
+                with ThreadPoolExecutor() as executor:
+                    results = list(executor.map(parse_line, lines))
+                tps_data = [result for result in results if result is not None]
+    except FileNotFoundError:
+        print(f"Error: File '{args.input_file}' not found.")
+        exit(1)
+    except json.JSONDecodeError:
+        print("Error: Failed to decode JSON input.")
+        exit(1)
+
+    # Check if we got any data
+    if not tps_data:
+        print("Error: No valid TPS data found in the input file.")
+        exit(1)
+
+    # Determine if we need to use hours or seconds based on the maximum time value
+    max_time_in_seconds = max(tps_data, key=lambda x: x[0])[0]
+    use_hours = max_time_in_seconds > 2 * 3600
+
+    # Convert times if necessary
+    if use_hours:
+        tps_data = [(time / 3600, tps) for time, tps in tps_data]
+        time_label = 'Time (hours)'
+    else:
+        time_label = 'Time (seconds)'
+
+    # Create a pandas DataFrame
+    df = pd.DataFrame(tps_data, columns=[time_label, 'TPS'])
+
+    # Plot the TPS values
+    plt.figure(figsize=(30, 12))
+    plt.plot(df[time_label], df['TPS'], 'o', markersize=2)
+    plt.title('Transactions Per Second (TPS) Over Time')
+    plt.xlabel(time_label)
+    plt.ylabel('TPS')
+    plt.grid(True)
+    plt.ylim(0)
+    plt.tight_layout()
+
+    # Save the plot to the output file
+    plt.savefig(args.output)
+    print(f"TPS plot saved to {args.output}")
+
+if __name__ == '__main__':
+    main()
diff --git a/playbooks/python/workflows/sysbench/sysbench-tps-variance.py b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
new file mode 100755
index 000000000000..ab86e52ff751
--- /dev/null
+++ b/playbooks/python/workflows/sysbench/sysbench-tps-variance.py
@@ -0,0 +1,108 @@
+#!/usr/bin/python3
+# SPDX-License-Identifier: copyleft-next-0.3.1
+
+# Accepts sysbench json output and outputs TPS variability graphs.
+
+import re
+import json
+import numpy as np
+import matplotlib.pyplot as plt
+import seaborn as sns
+import argparse
+from scipy.stats import norm
+
+# Function to extract TPS values from text file
+def extract_tps(filename):
+    tps_values = []
+    with open(filename, 'r') as file:
+        for line in file:
+            match = re.search(r'tps: (\d+\.\d+)', line)
+            if match:
+                tps_values.append(float(match.group(1)))
+    return tps_values
+
+# Function to extract TPS values from sysbench JSON output
+def extract_tps_json(filename):
+    tps_values = []
+    with open(filename, 'r') as file:
+        data = json.load(file)
+        for stat in data.get('statistics', []):
+            tps_values.append(stat.get('tps'))
+    return tps_values
+
+# Function to analyze TPS values (mean, median, std, variance)
+def analyze_tps(tps_values):
+    mean_tps = np.mean(tps_values)
+    median_tps = np.median(tps_values)
+    std_tps = np.std(tps_values)
+    variance_tps = np.var(tps_values)
+    return mean_tps, median_tps, std_tps, variance_tps
+
+# Function to print TPS statistics
+def print_statistics(label, tps_values):
+    mean_tps, median_tps, std_tps, variance_tps = analyze_tps(tps_values)
+    print(f'{label} Statistics:')
+    print(f'Mean TPS: {mean_tps:.2f}')
+    print(f'Median TPS: {median_tps:.2f}')
+    print(f'Standard Deviation of TPS: {std_tps:.2f}')
+    print(f'Variance of TPS: {variance_tps:.2f}\n')
+
+# Function to plot histograms
+def plot_histograms(tps_values1, tps_values2, legend1, legend2, color1, color2):
+    plt.figure(figsize=(20, 12))
+    bins = np.linspace(min(tps_values1 + (tps_values2 or [])), max(tps_values1 + (tps_values2 or [])), 30)
+    plt.hist(tps_values1, bins=bins, alpha=0.5, color=color1, edgecolor='black', label=legend1)
+    if tps_values2:
+        plt.hist(tps_values2, bins=bins, alpha=0.5, color=color2, edgecolor='black', label=legend2)
+    plt.title('Distribution of TPS Values')
+    plt.xlabel('Transactions Per Second (TPS)')
+    plt.ylabel('Frequency')
+    plt.legend(loc='best')
+    plt.grid(True)
+    plt.savefig('histogram.png')
+    plt.show()
+
+# Other plot functions omitted for brevity, similar to original
+
+# Main function for argument parsing and running the analysis
+def main():
+    parser = argparse.ArgumentParser(description='Analyze and compare TPS values from sysbench output files.')
+    parser.add_argument('file1', help='First TPS file')
+    parser.add_argument('legend1', help='Legend for the first TPS file')
+    parser.add_argument('file2', nargs='?', default=None, help='Second TPS file (optional)')
+    parser.add_argument('legend2', nargs='?', default=None, help='Legend for the second TPS file (optional)')
+    parser.add_argument('--json1', action='store_true', help='Specify if the first file is in JSON format')
+    parser.add_argument('--json2', action='store_true', help='Specify if the second file is in JSON format (optional)')
+    parser.add_argument('--color1', default='cyan', help='Color for the first dataset (default: cyan)')
+    parser.add_argument('--color2', default='orange', help='Color for the second dataset (default: orange)')
+
+    args = parser.parse_args()
+
+    plt.style.use('dark_background')  # Set the dark theme
+
+    # Extract TPS data for file1 (JSON or text)
+    if args.json1:
+        tps_values1 = extract_tps_json(args.file1)
+    else:
+        tps_values1 = extract_tps(args.file1)
+
+    # Extract TPS data for file2 if provided (JSON or text)
+    if args.file2:
+        if args.json2:
+            tps_values2 = extract_tps_json(args.file2)
+        else:
+            tps_values2 = extract_tps(args.file2)
+    else:
+        tps_values2 = None
+
+    print_statistics(args.legend1, tps_values1)
+    if tps_values2:
+        print_statistics(args.legend2, tps_values2)
+
+    # Plot histograms
+    plot_histograms(tps_values1, tps_values2, args.legend1, args.legend2 if args.legend2 else '', args.color1, args.color2)
+
+    # Other plots omitted for brevity
+
+if __name__ == '__main__':
+    main()
diff --git a/playbooks/roles/gen_hosts/tasks/main.yml b/playbooks/roles/gen_hosts/tasks/main.yml
index a4dd1f5a0fc3..87fa4f58e2b5 100644
--- a/playbooks/roles/gen_hosts/tasks/main.yml
+++ b/playbooks/roles/gen_hosts/tasks/main.yml
@@ -261,3 +261,58 @@
     - kdevops_workflows_dedicated_workflow
     - kdevops_workflow_enable_selftests
     - ansible_hosts_template.stat.exists
+
+- name: Collect dynamically supported filesystems
+  vars:
+    supported_filesystems_variables: "{{ vars | dict2items | selectattr('key', 'search', '^sysbench_supported_filesystem_') }}"
+    supported_filesystems: "{{ supported_filesystems_variables | selectattr('value', 'eq', True) | map(attribute='key') | map('regex_replace', '^sysbench_supported_filesystem_', '') | list }}"
+  set_fact:
+    sysbench_enabled_filesystems: "{{ supported_filesystems }}"
+    enabled_sysbench_tests: "{{ [] }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+
+- name: Collect enabled sysbench target sections for dynamically supported filesystems
+  loop: "{{ sysbench_enabled_filesystems }}"
+  loop_control:
+    loop_var: fs
+  vars:
+    fs_section_prefix: "sysbench_{{ fs }}_section_"
+    fs_section_variables: "{{ vars | dict2items | selectattr('key', 'search', '^' + fs_section_prefix) }}"
+    enabled_fs_sysbench: "{{ fs_section_variables | selectattr('value', 'eq', True) | map(attribute='key') | list }}"
+    enabled_fs_sections: "{{ enabled_fs_sysbench | map('regex_replace', 'sysbench_', '') }}"
+    enabled_fs: "{{ enabled_fs_sections | map('regex_replace', 'section_', '') }}"
+    enabled_fs_node: "{{ enabled_fs | map('regex_replace', '_', '-') }}"
+  set_fact:
+    enabled_sysbench_tests: "{{ enabled_sysbench_tests + enabled_fs_node }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+
+- name: Generate the Ansible hosts file for a dedicated sysbench setup
+  tags: [ 'hosts' ]
+  template:
+    src: "{{ kdevops_hosts_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_hosts }}"
+    force: yes
+    trim_blocks: True
+    lstrip_blocks: True
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+    - ansible_hosts_template.stat.exists
+
+- name: Verify if final host file exists
+  stat:
+    path: "{{ topdir_path }}/{{ kdevops_hosts }}"
+  register: final_hosts_file
+
+- name: Fail if the dedicated workflow has no rules for hosts file node configuration
+  tags: [ 'hosts' ]
+  fail:
+    msg: "Your dedicated workflow lacks rules for what nodes to use, go work on allowed topologies to parallelize testing one per node"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - ansible_hosts_template.stat.exists
+    - not final_hosts_file.stat.exists
diff --git a/playbooks/roles/gen_hosts/templates/sysbench.j2 b/playbooks/roles/gen_hosts/templates/sysbench.j2
new file mode 100644
index 000000000000..76503ab76176
--- /dev/null
+++ b/playbooks/roles/gen_hosts/templates/sysbench.j2
@@ -0,0 +1,23 @@
+[all]
+{% for test_type in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% if kdevops_baseline_and_dev %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+{% endif %}
+{% endfor %}
+[all:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+[baseline]
+{% for test_type in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test_type }}
+{% endfor %}
+[baseline:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
+[dev]
+{% if kdevops_baseline_and_dev %}
+  {% for test_type in enabled_sysbench_tests %}
+{{ kdevops_host_prefix }}-{{ test_type }}-dev
+  {% endfor %}
+{% endif %}
+[dev:vars]
+ansible_python_interpreter =  "{{ kdevops_python_interpreter }}"
diff --git a/playbooks/roles/gen_nodes/defaults/main.yml b/playbooks/roles/gen_nodes/defaults/main.yml
index 9955135594cb..00971e293d12 100644
--- a/playbooks/roles/gen_nodes/defaults/main.yml
+++ b/playbooks/roles/gen_nodes/defaults/main.yml
@@ -103,6 +103,8 @@ bootlinux_9p_driver: "virtio-9p-pci"
 
 guestfs_requires_uefi: False
 
+kdevops_workflow_enable_sysbench: False
+
 pcie_passthrough_enable: False
 pcie_passthrough_target_type_first_guest: False
 pcie_passthrough_target_type_all_one_guest_name: False
diff --git a/playbooks/roles/gen_nodes/tasks/main.yml b/playbooks/roles/gen_nodes/tasks/main.yml
index 247cbfcf3e94..49c544bd172f 100644
--- a/playbooks/roles/gen_nodes/tasks/main.yml
+++ b/playbooks/roles/gen_nodes/tasks/main.yml
@@ -400,6 +400,78 @@
     - kdevops_workflow_enable_selftests
     - ansible_nodes_template.stat.exists
 
+- name: Collect dynamically supported filesystems
+  vars:
+    supported_filesystems_variables: "{{ vars | dict2items | selectattr('key', 'search', '^sysbench_supported_filesystem_') }}"
+    supported_filesystems: "{{ supported_filesystems_variables | selectattr('value', 'eq', True) | map(attribute='key') | map('regex_replace', '^sysbench_supported_filesystem_', '') | list }}"
+  set_fact:
+    sysbench_enabled_filesystems: "{{ supported_filesystems }}"
+    enabled_sysbench_tests: "{{ [] }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+
+- name: Collect enabled sysbench target sections for dynamically supported filesystems
+  loop: "{{ sysbench_enabled_filesystems }}"
+  loop_control:
+    loop_var: fs
+  vars:
+    fs_section_prefix: "sysbench_{{ fs }}_section_"
+    fs_section_variables: "{{ vars | dict2items | selectattr('key', 'search', '^' + fs_section_prefix) }}"
+    enabled_fs_sysbench: "{{ fs_section_variables | selectattr('value', 'eq', True) | map(attribute='key') | list }}"
+    enabled_fs_sections: "{{ enabled_fs_sysbench | map('regex_replace', '^sysbench_', '') }}"
+    enabled_fs: "{{ enabled_fs_sections | map('regex_replace', 'section_', '') }}"
+    prefixed_fs: "{{ enabled_fs | map('regex_replace', '^', kdevops_host_prefix + '-') }}"
+    enabled_fs_node: "{{ prefixed_fs | map('regex_replace', '_', '-') }}"
+  set_fact:
+    enabled_sysbench_tests: "{{ enabled_sysbench_tests + enabled_fs_node }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+
+- name: Augment sysbench targets with dev nodes
+  loop: "{{ sysbench_enabled_filesystems }}"
+  loop_control:
+    loop_var: fs
+  vars:
+    fs_section_prefix: "sysbench_{{ fs }}_section_"
+    fs_section_variables: "{{ vars | dict2items | selectattr('key', 'search', '^' + fs_section_prefix) }}"
+    enabled_fs_sysbench: "{{ fs_section_variables | selectattr('value', 'eq', True) | map(attribute='key') | list }}"
+    enabled_fs_sections: "{{ enabled_fs_sysbench | map('regex_replace', '^sysbench_', '') }}"
+    enabled_fs: "{{ enabled_fs_sections | map('regex_replace', 'section_', '') }}"
+    prefixed_and_postfixed_fs: "{{ enabled_fs | map('regex_replace', '^', kdevops_host_prefix + '-') | map('regex_replace', '$', '-dev') }}"
+    enabled_fs_node: "{{ prefixed_and_postfixed_fs | map('regex_replace', '_', '-') }}"
+  set_fact:
+    enabled_sysbench_tests: "{{ enabled_sysbench_tests + enabled_fs_node }}"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+    - kdevops_baseline_and_dev
+
+- name: Fail if no sysbench tests are enabled
+  fail:
+    msg: "No sysbench tests are enabled. You should enable at least one."
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+    - ansible_nodes_template.stat.exists
+    - enabled_sysbench_tests | length == 0
+
+- name: Generate the sysbench kdevops nodes file using {{ kdevops_nodes_template }} as jinja2 source template
+  tags: [ 'hosts' ]
+  vars:
+    node_template: "{{ kdevops_nodes_template | basename }}"
+    nodes: "{{ enabled_sysbench_tests }}"
+    all_generic_nodes: "{{ enabled_sysbench_tests }}"
+  template:
+    src: "{{ node_template }}"
+    dest: "{{ topdir_path }}/{{ kdevops_nodes }}"
+    force: yes
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - kdevops_workflow_enable_sysbench
+    - ansible_nodes_template.stat.exists
+
 - name: Get the control host's timezone
   ansible.builtin.command: "timedatectl show -p Timezone --value"
   register: kdevops_host_timezone
@@ -448,6 +520,20 @@
     - vagrant_template.stat.exists
     - not vagrant_dest.stat.exists
 
+- name: Verify if dedicated workflow defined a custom nodes template and the final file exists {{ kdevops_nodes_template_full_path }}
+  stat:
+    path: "{{ topdir_path }}/{{ kdevops_nodes }}"
+  register: dedicated_nodes_template
+
+- name: Fail if the dedicated workflow has no rules for node configuration
+  tags: [ 'nodes' ]
+  fail:
+    msg: "Your dedicated workflow lacks rules for what nodes to use, go work on allowed topologies to parallelize testing one per node"
+  when:
+    - kdevops_workflows_dedicated_workflow
+    - ansible_nodes_template.stat.exists
+    - not dedicated_nodes_template.stat.exists
+
 - name: Import list of guest nodes
   include_vars: "{{ topdir_path }}/{{ kdevops_nodes }}"
   ignore_errors: yes
diff --git a/playbooks/roles/sysbench/defaults/main.yml b/playbooks/roles/sysbench/defaults/main.yml
new file mode 100644
index 000000000000..32f669b10c4f
--- /dev/null
+++ b/playbooks/roles/sysbench/defaults/main.yml
@@ -0,0 +1,52 @@
+---
+sysbench_db_type: "mysql"
+sysbench_db_type_mysql: false
+sysbench_docker: false
+sysbench_type_mysql_docker: False
+sysbench_disk_setup_env: ""
+
+sysbench_fs_sector_size: "512"
+sysbench_device: "/dev/null"
+sysbench_fstype: "xfs"
+sysbench_fs_env: ""
+sysbench_label: "sysbench_db"
+sysbench_fs_opts: ""
+sysbench_mount_opts: ""
+sysbench_mnt: "/db"
+
+sysbench_mysql_table_engine: "innodb"
+
+sysbench_test_atomics: False
+sysbench_test_atomics_tps_variability: False
+
+sysbench_sectsize_size_env: ""
+
+sysbench_mysql_container_image_string: "mysql:8.0"
+sysbench_mysql_container_name: "mysql-sysbench"
+sysbench_mysql_container_python_path: "/usr/local/lib/python3.9/site-packages"
+sysbench_mysql_container_host_config_path: "/data/mysql.conf"
+sysbench_mysql_container_config: "/etc/mysql/conf.d/mysql.cnf"
+sysbench_mysql_container_db_path: "/var/lib/mysql"
+sysbench_mysql_container_host_root_path: "/data/mysql-container-root"
+
+sysbench_container_image_name: "severalnines/sysbench"
+sysbench_container_populate_name: "sysbench-populate"
+sysbench_container_run_benchmark: "sysbench-run"
+sysbench_local_db_port: 9901
+
+sysbench_root_db_password: "kdevops"
+sysbench_db_password: "kdevops"
+
+sysbench_cnf: "/etc/sysbench/conf.d/sysbench.cnf"
+sysbench_dir: "/var/lib/sysbench"
+
+sysbench_oltp_table_size: 100000
+sysbench_oltp_table_count: 24
+sysbench_report_interval: 2
+sysbench_threads: 128
+
+sysbench_telemetry_path: "/data/sysbench-telemetry"
+sysbench_docker_telemetry_path: "/data/sysbench-telemetry"
+
+sysbench_disable_doublewrite_auto: False
+sysbench_disable_doublewrite_always: False
diff --git a/playbooks/roles/sysbench/tasks/install-deps/debian/main.yml b/playbooks/roles/sysbench/tasks/install-deps/debian/main.yml
new file mode 100644
index 000000000000..90837a167eab
--- /dev/null
+++ b/playbooks/roles/sysbench/tasks/install-deps/debian/main.yml
@@ -0,0 +1,31 @@
+---
+- name: Import optional extra_args file
+  include_vars: "{{ item }}"
+  ignore_errors: yes
+  with_first_found:
+    - files:
+      - "../extra_vars.yml"
+      - "../extra_vars.yaml"
+      - "../extra_vars.json"
+      skip: true
+  tags: vars
+
+- name: Update apt cache
+  become: yes
+  become_method: sudo
+  apt:
+    update_cache: yes
+  tags: deps
+
+- name: Install sysbench deps
+  become: yes
+  become_method: sudo
+  apt:
+    name:
+      - docker
+      - docker.io
+      - locales
+      - rsync
+    state: present
+    update_cache: yes
+  tags: [ 'deps' ]
diff --git a/playbooks/roles/sysbench/tasks/install-deps/main.yml b/playbooks/roles/sysbench/tasks/install-deps/main.yml
new file mode 100644
index 000000000000..4e01d57dc38b
--- /dev/null
+++ b/playbooks/roles/sysbench/tasks/install-deps/main.yml
@@ -0,0 +1,12 @@
+---
+- include_role:
+    name: pkg
+
+# tasks to install distribution-specific dependencies for sysbench
+- name: Sysbench distribution-specific setup
+  import_tasks: tasks/install-deps/debian/main.yml
+  when: ansible_facts['os_family']|lower == 'debian'
+- import_tasks: tasks/install-deps/suse/main.yml
+  when: ansible_facts['os_family']|lower == 'suse'
+- import_tasks: tasks/install-deps/redhat/main.yml
+  when: ansible_facts['os_family']|lower == 'redhat'
diff --git a/playbooks/roles/sysbench/tasks/install-deps/redhat/main.yml b/playbooks/roles/sysbench/tasks/install-deps/redhat/main.yml
new file mode 100644
index 000000000000..d63b66fb0d3d
--- /dev/null
+++ b/playbooks/roles/sysbench/tasks/install-deps/redhat/main.yml
@@ -0,0 +1,43 @@
+---
+- name: Enable the CodeReady repo
+  become: yes
+  command: /usr/bin/dnf config-manager --enable codeready-builder-for-rhel-{{ ansible_distribution_major_version }}-{{ ansible_architecture }}-rpms
+  when:
+    - ansible_distribution == 'RedHat'
+    - not devconfig_custom_yum_repofile
+
+- name: Enable the CodeReady repo
+  become: yes
+  command: /usr/bin/dnf config-manager --enable crb
+  when:
+    - ansible_distribution == 'CentOS'
+    - not devconfig_custom_yum_repofile
+
+- name: Install epel-release if we're not on Fedora
+  become: yes
+  become_method: sudo
+  yum:
+    update_cache: yes
+    name: "{{ packages }}"
+  retries: 3
+  delay: 5
+  register: result
+  until: result.rc == 0
+  vars:
+    packages:
+      - epel-release
+  when: ansible_distribution != "Fedora"
+
+- name: Install docker
+  become: yes
+  become_method: sudo
+  yum:
+    update_cache: yes
+    name: "{{ packages }}"
+  retries: 3
+  delay: 5
+  register: result
+  until: result.rc == 0
+  vars:
+    packages:
+      - docker
diff --git a/playbooks/roles/sysbench/tasks/install-deps/suse/main.yml b/playbooks/roles/sysbench/tasks/install-deps/suse/main.yml
new file mode 100644
index 000000000000..6a31dca6d249
--- /dev/null
+++ b/playbooks/roles/sysbench/tasks/install-deps/suse/main.yml
@@ -0,0 +1,67 @@
+---
+- name: Set generic SUSE specific distro facts
+  set_fact:
+    is_sle: '{{ (ansible_distribution == "SLES") or (ansible_distribution == "SLED") }}'
+    is_leap: '{{ "Leap" in ansible_distribution }}'
+    is_tumbleweed: '{{ "openSUSE Tumbleweed" == ansible_distribution }}'
+
+- name: Set SLE specific version labels to make checks easier
+  set_fact:
+    is_sle10: '{{ ansible_distribution_major_version == "10" }}'
+    is_sle11: '{{ ansible_distribution_major_version == "11" }}'
+    is_sle12: '{{ ansible_distribution_major_version == "12" }}'
+    is_sle15: '{{ ansible_distribution_major_version == "15" }}'
+    is_sle10sp3: '{{ ansible_distribution_version == "10.3" }}'
+    is_sle11sp1: '{{ ansible_distribution_version == "11.1" }}'
+    is_sle11sp4: '{{ ansible_distribution_version == "11.4" }}'
+    is_sle12sp1: '{{ ansible_distribution_version == "12.1" }}'
+    is_sle12sp3: '{{ ansible_distribution_version == "12.3" }}'
+    is_sle12sp5: '{{ ansible_distribution_version == "12.5" }}'
+    is_sle15sp2: '{{ ansible_distribution_version == "15.2" }}'
+    is_sle15sp3: '{{ ansible_distribution_version == "15.3" }}'
+    is_sle15sp4: '{{ ansible_distribution_version == "15.4" }}'
+  when:
+    - is_sle|bool
+
+- name: Set SLE specific version labels to make checks easier when not SLE
+  set_fact:
+    is_sle10: False
+    is_sle11: False
+    is_sle12: False
+    is_sle15: False
+    is_sle10sp3: False
+    is_sle11sp1: False
+    is_sle11sp4: False
+    is_sle12sp1: False
+    is_sle12sp3: False
+    is_sle12sp5: False
+    is_sle15sp2: False
+    is_sle15sp3: False
+    is_sle15sp4: False
+  when:
+    - not is_sle|bool
+
+- name: By default we assume we have figured out how to add repos on a release
+  set_fact:
+    repos_present: true
+
+- name: Lets us disable things which require a zypper repo present
+  set_fact:
+    repos_present: false
+  when:
+    - is_sle|bool
+    - is_sle10|bool or is_sle11|bool
+
+- name: The default is to assume all distros have the indent package
+  set_fact:
+    has_indent: True
+
+- name: Install docker tools
+  become: yes
+  become_method: sudo
+  ansible.builtin.package:
+    name:
+      - docker
+    state: present
+  when:
+    - repos_present|bool
diff --git a/playbooks/roles/sysbench/tasks/main.yaml b/playbooks/roles/sysbench/tasks/main.yaml
new file mode 100644
index 000000000000..9d67bf420404
--- /dev/null
+++ b/playbooks/roles/sysbench/tasks/main.yaml
@@ -0,0 +1,530 @@
+---
+- name: Import optional extra_args file
+  ansible.builtin.include_vars:
+    file: "{{ item }}"
+  with_first_found:
+    - files:
+        - "../extra_vars.yml"
+        - "../extra_vars.yaml"
+        - "../extra_vars.json"
+      skip: true
+  failed_when: false
+  tags: vars
+
+- name: Create a few directories which kdevops uses for sysbench if they do not exist
+  tags: [ 'mkfs' ]
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  with_items:
+    - "{{ kdevops_data }}"
+    - "{{ sysbench_mnt }}"
+
+# Distro specific
+- name: Install dependencies
+  include_tasks: install-deps/main.yml
+
+- include_role:
+    name: create_data_partition
+  tags: [ 'mkfs' ]
+
+- name: Ensure telemetry data directory exists
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.file:
+    path: "{{ sysbench_telemetry_path }}"
+    state: directory
+    mode: "u=rwx,g=rx,o=rx"
+  when: 'sysbench_type_mysql_docker|bool'
+  tags: ['setup']
+
+- name: Ensure MySQL root user directory exists
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.file:
+    path: "{{ sysbench_mysql_container_host_root_path }}"
+    state: directory
+    mode: "u=rwx,g=rx,o=rx"
+  when: 'sysbench_type_mysql_docker|bool'
+  tags: ['setup']
+
+- name: Determine filesystem setting used and db page size
+  vars:
+    fs_type_variable: "{{ ansible_host | regex_replace('^' + kdevops_host_prefix + '-', '')  | regex_replace('-.+', '') }}"
+    fs_command_variable_simple: "sysbench_{{ ansible_host | regex_replace('^' + kdevops_host_prefix + '-', '') | regex_replace('-dev$', '') }}_cmd"
+    fs_command_variable: "{{ fs_command_variable_simple | regex_replace('-', '_') | regex_replace('^sysbench_' + fs_type_variable, fs_type_variable + '_section') }}"
+    db_page_size_simple: "sysbench_{{ ansible_host | regex_replace('^' + kdevops_host_prefix + '-', '') | regex_replace('-dev$', '') }}_db_page_size"
+    db_page_size_variable: "{{ db_page_size_simple | regex_replace('-', '_') | regex_replace('^sysbench_' + fs_type_variable, fs_type_variable + '_section') }}"
+    fs_sector_size_variable: "sysbench_{{ fs_type_variable }}_sector_size"
+    fs_cmd: "{{  lookup('vars', 'sysbench_' + fs_command_variable) }}"
+    sect_size: "{{  lookup('vars', fs_sector_size_variable) }}"
+    db_page_size: "{{  lookup('vars', 'sysbench_' + db_page_size_variable) }}"
+  set_fact:
+    filesystem_command_for_host: "{{ fs_cmd }}"
+    sysbench_fs_sector_size: "{{ sect_size }}"
+    sysbench_fstype: "{{ fs_type_variable }}"
+    sysbench_fs_opts_without_sector_size: "{{ fs_cmd | regex_replace('^[^ ]+ ', '')  }}"
+    sysbench_db_page_size: "{{ db_page_size }}"
+  tags: ['vars' ]
+
+- name: Set filesystem options for XFS with sector size
+  set_fact:
+    sysbench_fs_opts: "{{ sysbench_fs_opts_without_sector_size }} -s size={{ sysbench_fs_sector_size }} -L {{ sysbench_label }}"
+  when: sysbench_fstype != 'ext4'
+  tags: ['mkfs']
+
+- name: Set filesystem options for ext4 without sector size
+  set_fact:
+    sysbench_fs_opts: "{{ sysbench_fs_opts_without_sector_size }} -L {{ sysbench_label }}"
+  when: sysbench_fstype == 'ext4'
+  tags: ['mkfs']
+
+- name: Set environment variable for sector size for ext4
+  set_fact:
+    sysbench_fs_env:
+      MKE2FS_DEVICE_SECTSIZE: "{{ sysbench_fs_sector_size }}"
+  when: sysbench_fstype == 'ext4'
+  tags: ['mkfs']
+
+- name: Clear environment variable for non-ext4 filesystems
+  set_fact:
+    sysbench_fs_env: {}
+  when: sysbench_fstype != 'ext4'
+  tags: ['mkfs']
+
+- name: Display the filesystem options and environment variable for the current host
+  debug:
+    msg: |
+      Sysbench device:    {{ sysbench_device }}
+      Sysbench fstype:    {{ sysbench_fstype }}
+      Sysbench fs opts:   {{ sysbench_fs_opts }}
+      Sysbench label:     {{ sysbench_label }}
+      Sysbench mount:     {{ sysbench_mnt }}
+      Sysbench env:       {{ sysbench_fs_env }}
+  tags: ['debug']
+
+- name: Fail if no filesystem command is found for the host
+  fail:
+    msg: "No filesystem configuration command found for the current host: {{ ansible_host }}"
+  when: filesystem_command_for_host is undefined
+  tags: ['mkfs']
+
+- name: Create the filesystem we'll use to place the database under test
+  ansible.builtin.include_role:
+    name: create_partition
+  vars:
+    disk_setup_device: "{{ sysbench_device }}"
+    disk_setup_fstype: "{{ sysbench_fstype }}"
+    disk_setup_label: "{{ sysbench_label }}"
+    disk_setup_path: "{{ sysbench_mnt }}"
+    disk_setup_fs_opts: "{{ sysbench_fs_opts }}"
+    disk_setup_env: "{{ sysbench_fs_env }}"
+  tags: debug
+
+- name: Set sysbench_mysql_innodb_doublewrite based on ansible_host
+  tags: ['vars' ]
+  set_fact:
+    sysbench_mysql_innodb_doublewrite: "{{ '0' if ansible_host is search('-dev$') else '1' }}"
+  when:
+    - 'sysbench_disable_doublewrite_auto|bool'
+
+- name: Set sysbench_mysql_innodb_doublewrite based on ansible_host
+  tags: ['vars' ]
+  set_fact:
+    sysbench_mysql_innodb_doublewrite: '0'
+  when:
+    - 'sysbench_disable_doublewrite_always|bool'
+
+- name: Generate MySQL configuration file from template
+  tags: ['setup']
+  ansible.builtin.template:
+    src: "{{ sysbench_mysql_container_host_config_path | basename }}.j2"
+    dest: "{{ sysbench_mysql_container_host_config_path }}"
+    mode: "u=rw,g=r,o=r"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Copy and template all files used for the /root/ directory on MySQL container
+  tags: ['setup']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.template:
+    src: "{{ item }}"
+    dest: "{{ sysbench_mysql_container_host_root_path }}/{{ item | basename }}"
+    mode: "u=rwx,g=rx,o=rx"
+  with_fileglob:
+    - "templates/root-user/*"
+  loop_control:
+    label: "Copying {{ item | basename }} to {{ sysbench_mysql_container_host_root_path }}"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Create a few directories needed for telemetry inside the docker container
+  tags: [ 'setup' ]
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.file:
+    path: "{{ item }}"
+    state: directory
+  with_items:
+    - "{{ sysbench_mysql_container_host_root_path }}/.mysqlsh/"
+
+- name: git clone our mysqlsh plugin for telemetry
+  tags: ['setup']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  environment:
+    GIT_SSL_NO_VERIFY: "true"
+  git:
+    repo: "https://github.com/lefred/mysqlshell-plugins.git"
+    dest: "{{ sysbench_mysql_container_host_root_path }}/.mysqlsh/plugins/"
+    update: yes
+    version: master
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Get used target kernel version
+  tags: [ 'db_start' ]
+  command: "uname -r"
+  register: uname_cmd
+
+- name: Store last kernel variable
+  set_fact:
+    last_kernel: "{{ uname_cmd.stdout }}"
+  tags: ['db_start']
+  run_once: true
+
+- name: Ensure the results directory exists on the localhost
+  tags: ['db_start']
+  local_action: file
+  args:
+    path: "{{ topdir_path }}/workflows/sysbench/results/"
+    state: directory
+  run_once: true
+
+- name: Ensure the results directory exists on the localhost for each node locally
+  tags: ['db_start']
+  local_action: file
+  args:
+    path: "{{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/"
+    state: directory
+
+- name: Document used target kernel version
+  local_action: "shell echo {{ last_kernel }} > {{ topdir_path }}/workflows/sysbench/results/last-kernel.txt"
+  tags: ['db_start']
+  run_once: true
+
+- name: Document double write buffer setting on node
+  local_action: "shell echo {{ sysbench_mysql_innodb_doublewrite }} > {{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/innodb_doublewrite.txt"
+  tags: ['db_start']
+
+- name: Document db page size setting on node
+  local_action: "shell echo {{ sysbench_db_page_size }} > {{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/innodb_page_size.txt"
+  tags: ['db_start']
+
+- name: Remove any old MySQL container first
+  tags: ['post_entrypoint']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_mysql_container_name }}"
+    image: "{{ sysbench_mysql_container_image_string }}"
+    state: absent
+  when: 'sysbench_type_mysql_docker|bool'
+
+
+- name: Start MySQL Docker container
+  tags: ['db_start']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_mysql_container_name }}"
+    image: "{{ sysbench_mysql_container_image_string }}"
+    state: started
+    restart_policy: no
+    volumes:
+      - "{{ sysbench_mysql_container_host_config_path }}:{{ sysbench_mysql_container_config }}"
+      - "{{ sysbench_mnt }}:{{ sysbench_mysql_container_db_path }}"
+      - "{{ sysbench_telemetry_path }}:{{ sysbench_docker_telemetry_path }}"
+      - "{{ sysbench_mysql_container_host_root_path }}:/root/"
+    published_ports:
+      - "{{ sysbench_local_db_port }}:3306"
+    env:
+      MYSQL_DATABASE: "{{ sysbench_db_name }}"
+      MYSQL_ROOT_PASSWORD: "{{ sysbench_root_db_password }}"
+      PYTHONPATH: "{{ sysbench_mysql_container_python_path }}"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Wait for it... (MySQL data port to be up)
+  tags: ['db_start']
+  ansible.builtin.wait_for:
+    host: localhost
+    port: "{{ sysbench_local_db_port }}"
+    timeout: 20
+    state: started
+
+- name: Remove the sysbench container
+  tags: ['post_entrypoint']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: absent
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Start the sysbench container
+  tags: ['post_entrypoint']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: started
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Ensure sysbench db privs are set and install python pandas and matplotlib
+  tags: ['post_entrypoint']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container_exec:
+    container: "{{ sysbench_mysql_container_name }}"
+    command: >
+      /root/post-entrypoint-custom-bringup.sh
+  when: 'sysbench_type_mysql_docker|bool'
+
+# Keep this at threads=1 as multiple threads don't work when building the
+# initial database.
+- name: Use the sysbench container to populate the sysbench database
+  tags: ['populate_sbtest']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: started
+    command: >
+      /usr/bin/sysbench
+      --db-driver={{ sysbench_db_type }}
+      --oltp-table-size={{ sysbench_oltp_table_size }}
+      --oltp-tables-count={{ sysbench_oltp_table_count }}
+      --threads=1
+      --mysql-host=127.0.0.1
+      --mysql-port={{ sysbench_local_db_port }}
+      --mysql-user={{ sysbench_db_username }}
+      --mysql-password={{ sysbench_db_password }}
+      /usr/share/sysbench/tests/include/oltp_legacy/parallel_prepare.lua run
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Remove the sysbench population container to ensure we sync
+  tags: ['run_sysbench']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: absent
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Run sysbench benchmark workload against MySQL
+  tags: ['run_sysbench']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: started
+    command: >
+      /usr/bin/sysbench
+      --db-driver={{ sysbench_db_type }}
+      --report-interval={{ sysbench_report_interval }}
+      --mysql-table-engine={{ sysbench_mysql_table_engine }}
+      --oltp-table-size={{ sysbench_oltp_table_size }}
+      --oltp-tables-count={{ sysbench_oltp_table_count }}
+      --threads={{ sysbench_threads }}
+      --time={{ sysbench_test_duration }}
+      --mysql-host=127.0.0.1
+      --mysql-port={{ sysbench_local_db_port }}
+      --mysql-user={{ sysbench_db_username }}
+      --mysql-password={{ sysbench_db_password }}
+      --json-output
+      /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua run
+  async: "{{ sysbench_test_duration | int + 10 }}" # Maximum allowed time to complete
+  poll: 0 # Run in the background
+  register: sysbench_job # Register the job ID
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Collect MySQL telemetry inside the Docker MySQL container at the same time
+  tags: ['telemetry']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container_exec:
+    container: "{{ sysbench_mysql_container_name }}"
+    env:
+      MYSQL_DATABASE: "{{ sysbench_db_name }}"
+      MYSQL_ROOT_PASSWORD: "{{ sysbench_root_db_password }}"
+      PYTHONPATH: "{{ sysbench_mysql_container_python_path }}"
+    command: >
+      mysqlsh -u root -p{{ sysbench_root_db_password }} --execute
+      "support.collect(mysql=true, os=true, time={{ sysbench_test_duration | int // 60 }}, outputdir='{{ sysbench_telemetry_path }}')"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Wait for sysbench workload to complete
+  tags: ['run_sysbench']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  async_status:
+    jid: "{{ sysbench_job.ansible_job_id }}"
+  register: sysbench_result
+  until: sysbench_result.finished
+  retries: "{{ sysbench_test_duration | int // 60 }}"  # Retries every minute
+  delay: 60  # Delay between retries (in seconds)
+
+- name: Move sysbench async results file to telemetry
+  tags: ['run_sysbench']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  command: mv "{{ sysbench_result.results_file }}" "{{ sysbench_telemetry_path }}/sysbench_output.json"
+
+- name: Fetch sysbench container logs
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  tags: ['run_sysbench']
+  ansible.builtin.shell:
+    cmd: "docker logs {{ sysbench_container_name }}"
+  register: sysbench_logs
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Save sysbench logs to a file on the local machine
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  tags: ['run_sysbench']
+  copy:
+    content: "{{ sysbench_logs.stdout }}"
+    dest: "{{ sysbench_telemetry_path }}/docker-sysbench-results-{{ ansible_date_time.iso8601 }}.log"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Collect sysbench docker logs for MySQL container
+  tags: ['logs']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.shell:
+    cmd: "docker logs {{ sysbench_mysql_container_name }}"
+  register: sysbench_mysql_container_logs
+  changed_when: false
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Save docker MySQL logs on node
+  tags: ['logs']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  ansible.builtin.copy:
+    content: "{{ sysbench_mysql_container_logs.stdout }}"
+    dest: "{{ sysbench_telemetry_path}}/docker-mysql-results-{{ ansible_date_time.iso8601 }}.log"
+    mode: "u=rw,g=r,o=r"
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Remove the sysbench container which ran the benchmark
+  tags: ['run_sysbench']
+  become: yes
+  become_flags: 'su - -c'
+  become_method: sudo
+  community.docker.docker_container:
+    name: "{{ sysbench_container_name }}"
+    image: "{{ sysbench_container_image_name }}"
+    state: absent
+  when: 'sysbench_type_mysql_docker|bool'
+
+- name: Copy telemetry data from each node to the localhost
+  tags: ['results']
+  synchronize:
+    src: "{{ sysbench_telemetry_path }}/"
+    dest: "{{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/"
+    mode: pull
+    recursive: yes
+    rsync_opts:
+      - "--ignore-existing"
+  delegate_to: localhost
+  become: false
+
+- name: Gather kernel logs from each node
+  tags: ['results']
+  become: yes
+  become_method: sudo
+  command: journalctl -k
+  register: journal_cmd
+
+- name: Save kernel logs to local file per node
+  copy:
+    content: "{{ journal_cmd.stdout }}"
+    dest: "{{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/dmesg.txt"
+  delegate_to: localhost
+  tags: ['results']
+
+- name: Gather memory fragmentation index on each node
+  tags: ['results']
+  become: yes
+  become_method: sudo
+  command: cat /sys/kernel/debug/extfrag/extfrag_index
+  register: extfrag_index_cmd
+
+- name: Save memory fragmentation index per node
+  copy:
+    content: "{{ extfrag_index_cmd.stdout }}"
+    dest: "{{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/extfrag_index.txt"
+  delegate_to: localhost
+  tags: ['results']
+
+- name: Gather memory unusable index on each node
+  tags: ['results']
+  become: yes
+  become_method: sudo
+  command: cat /sys/kernel/debug/extfrag/unusable_index
+  register: unusable_index_cmd
+
+- name: Save memory unusable index per node
+  copy:
+    content: "{{ unusable_index_cmd.stdout }}"
+    dest: "{{ topdir_path }}/workflows/sysbench/results/{{ inventory_hostname }}/unusable_index.txt"
+  delegate_to: localhost
+  tags: ['results']
+
+- name: Remove all results and telemetry directories on the node
+  become: yes
+  file:
+    path: "{{ item }}"
+    state: absent
+  loop:
+    - "{{ sysbench_telemetry_path }}/results"
+  tags: ['clean']
+
+- name: Remove all results and telemetry directories on the host
+  become: yes
+  file:
+    path: "{{ item }}"
+    state: absent
+  loop:
+    - "{{ topdir_path }}/workflows/sysbench/results/"
+  delegate_to: localhost
+  tags: ['clean']
diff --git a/playbooks/roles/sysbench/templates/mysql.conf.j2 b/playbooks/roles/sysbench/templates/mysql.conf.j2
new file mode 100644
index 000000000000..f00fa40eb0f6
--- /dev/null
+++ b/playbooks/roles/sysbench/templates/mysql.conf.j2
@@ -0,0 +1,94 @@
+[mysqld]
+# Testing for disabling innodb_doublewrite
+
+# We use the default
+# datadir                         = /var/lib/mysql/
+port                            = 3306
+performance_schema              = OFF
+max_prepared_stmt_count         = 128000
+character_set_server            = latin1
+collation_server                = latin1_swedish_ci
+transaction_isolation           = REPEATABLE-READ
+default_storage_engine          = InnoDB
+disable_log_bin
+skip_external_locking
+skip_name_resolve
+default-authentication-plugin=mysql_native_password
+
+# InnoDB Settings
+#
+# innodb_dedicated_server=ON is not compatible with innodb_flush_method=O_DIRECT
+# and so we do what we can. See the values which MySQL recommends when
+# dedicated server mode is enabled; we just have to compute and test them
+# on our own:
+#
+# https://dev.mysql.com/doc/refman/8.0/en/innodb-dedicated-server.html
+#
+# Let us assume n1-standard-16 with 60GB RAM or AWS i4i.4xlarge with 128GB.
+# The recommended values seem very large considering that we disable
+# innodb_doublewrite and use O_DIRECT, so they could likely be adjusted. We
+# strive to provide generic configurations for these types of instances in
+# this example file for innodb_doublewrite=0.
+#
+# For systems with above 10 GiB RAM: 0.5625 * (RAM in GB)
+# n1-standard-16: 33.75G
+# i4i.4xlarge: 72G
+innodb_redo_log_capacity        = {{ sysbench_mysql_innoodb_redo_log_capacity }}
+
+# For systems with above 4 GiB RAM: 0.75 * (RAM in GB)
+# n1-standard-16: 45G
+# i4i.4xlarge: 96G
+innodb_buffer_pool_size         = {{ sysbench_mysql_innodb_buffer_pool_size }}
+
+# Take advantage of NVMe AWUPF >= 4k
+innodb_flush_method             = O_DIRECT
+innodb_page_size                = {{ sysbench_db_page_size }}
+innodb_doublewrite              = {{ sysbench_mysql_innodb_doublewrite }}
+
+innodb_file_per_table           = 1
+innodb_flush_log_at_trx_commit  = 0
+innodb_open_files               = 2000
+innodb_stats_on_metadata        = 0
+innodb_thread_concurrency       = 14
+
+innodb_max_dirty_pages_pct      = 90
+innodb_max_dirty_pages_pct_lwm  = 10
+innodb_use_native_aio           = 1
+innodb_stats_persistent         = 1
+innodb_spin_wait_delay          = 6
+innodb_max_purge_lag_delay      = 300000
+innodb_max_purge_lag            = 0
+innodb_checksum_algorithm       = none
+innodb_io_capacity              = 12000
+innodb_io_capacity_max          = 20000
+innodb_lru_scan_depth           = 9000
+innodb_change_buffering         = none
+innodb_read_only                = 0
+innodb_page_cleaners            = 4
+innodb_undo_log_truncate        = off
+innodb_read_io_threads          = 64
+innodb_write_io_threads         = 64
+innodb_adaptive_flushing        = 1
+innodb_flush_neighbors          = 0
+innodb_purge_threads            = 4
+innodb_adaptive_hash_index      = 0
+
+# Connection Settings
+max_connections                 = 4000
+table_open_cache                = 8000
+table_open_cache_instances      = 16
+back_log                        = 1500
+thread_cache_size               = 100
+thread_stack                    = 192K
+
+# Buffer Settings
+join_buffer_size                = 64M
+read_buffer_size                = 48M
+read_rnd_buffer_size            = 64M
+sort_buffer_size                = 64M
+
+# Search Settings
+ft_min_word_len                 = 3
+
+# Monitoring
+innodb_monitor_enable='%'
diff --git a/playbooks/roles/sysbench/templates/root-user/0001-setup-sysbench-permissions.sh b/playbooks/roles/sysbench/templates/root-user/0001-setup-sysbench-permissions.sh
new file mode 100755
index 000000000000..4711c244afc4
--- /dev/null
+++ b/playbooks/roles/sysbench/templates/root-user/0001-setup-sysbench-permissions.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+mysql -u root -p{{ sysbench_root_db_password }} -e \
+	"CREATE USER {{ sysbench_db_username }}@'%' IDENTIFIED BY '{{ sysbench_db_password }}'; GRANT ALL PRIVILEGES ON {{ sysbench_db_name }}.* to {{ sysbench_db_username }}@'%';"
diff --git a/playbooks/roles/sysbench/templates/root-user/0002-install-mysql-shell-plugin-reqs.sh b/playbooks/roles/sysbench/templates/root-user/0002-install-mysql-shell-plugin-reqs.sh
new file mode 100755
index 000000000000..961de4e601f4
--- /dev/null
+++ b/playbooks/roles/sysbench/templates/root-user/0002-install-mysql-shell-plugin-reqs.sh
@@ -0,0 +1,3 @@
+#!/bin/bash
+microdnf install -y python-pip
+pip install pandas matplotlib
diff --git a/playbooks/roles/sysbench/templates/root-user/post-entrypoint-custom-bringup.sh b/playbooks/roles/sysbench/templates/root-user/post-entrypoint-custom-bringup.sh
new file mode 100755
index 000000000000..f67b0b642c5e
--- /dev/null
+++ b/playbooks/roles/sysbench/templates/root-user/post-entrypoint-custom-bringup.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+for i in /root/*.sh; do
+        BASENAME=$(basename "$i")
+        if [[ ! "$BASENAME" =~ ^[0-9] ]]; then
+                continue
+        fi
+        if [[ -x "$i" ]]; then
+                "$i"
+        fi
+done
diff --git a/playbooks/sysbench.yml b/playbooks/sysbench.yml
new file mode 100644
index 000000000000..61026356edc7
--- /dev/null
+++ b/playbooks/sysbench.yml
@@ -0,0 +1,4 @@
+---
+- hosts: all
+  roles:
+    - role: sysbench
diff --git a/workflows/Makefile b/workflows/Makefile
index 34afb9af3f9b..d2b31d1a4e92 100644
--- a/workflows/Makefile
+++ b/workflows/Makefile
@@ -50,6 +50,10 @@ ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST))
 include workflows/nfstest/Makefile
 endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_NFSTEST == y
 
+ifeq (y,$(CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH))
+include workflows/sysbench/Makefile
+endif # CONFIG_KDEVOPS_WORKFLOW_ENABLE_SYSBENCH == y
+
 ANSIBLE_EXTRA_ARGS += $(WORKFLOW_ARGS)
 ANSIBLE_EXTRA_ARGS_SEPARATED += $(WORKFLOW_ARGS_SEPARATED)
 ANSIBLE_EXTRA_ARGS_DIRECT += $(WORKFLOW_ARGS_DIRECT)
diff --git a/workflows/sysbench/Kconfig b/workflows/sysbench/Kconfig
new file mode 100644
index 000000000000..b5111f18afd9
--- /dev/null
+++ b/workflows/sysbench/Kconfig
@@ -0,0 +1,127 @@
+config SYSBENCH_DB_TYPE_MYSQL
+	bool
+	output yaml
+
+source "workflows/sysbench/Kconfig.fs"
+
+choice
+	prompt "What type of sysbench testing do you want to use"
+	default SYSBENCH_DOCKER
+
+config SYSBENCH_DOCKER
+	bool "Run Sysbench in Docker Container"
+	output yaml
+	select SYSBENCH_TYPE_MYSQL_DOCKER
+	help
+	  Run sysbench inside Docker containers.
+
+endchoice
+
+if SYSBENCH_DOCKER
+source "workflows/sysbench/Kconfig.docker"
+endif # SYSBENCH_DOCKER
+
+config SYSBENCH_DB_TYPE
+	string
+	output yaml
+	default "mysql" if SYSBENCH_DB_TYPE_MYSQL
+
+if SYSBENCH_DB_TYPE_MYSQL
+
+config SYSBENCH_MYSQL_TABLE_ENGINE
+	string "Sysbench MySQL table engine to use"
+	output yaml
+	default "innodb" if SYSBENCH_TYPE_MYSQL_DOCKER
+
+config SYSBENCH_MYSQL_INNOODB_REDO_LOG_CAPACITY
+	string "innodb_redo_log_capacity"
+	output yaml
+	default "1G" if SYSBENCH_TYPE_MYSQL_DOCKER
+	help
+	  innodb_dedicated_server=ON is not compatible with innodb_flush_method=O_DIRECT
+	  and so we do what we can. See the following values which MySQL
+	  recommends when dedicated server is enabled; we just have to
+	  compute and test them on our own:
+
+	  https://dev.mysql.com/doc/refman/8.0/en/innodb-dedicated-server.html
+
+	  Let us assume a GCE n1-standard-16 with 60 GB RAM or an AWS
+	  i4i.4xlarge with 128 GB. The recommended values seem very large
+	  considering we disable innodb_doublewrite and use O_DIRECT, so
+	  they could likely be adjusted. We strive to provide generic
+	  configurations for these types of instances in this example file
+	  for innodb_doublewrite=0.
+
+	  For systems with above 10 GiB RAM: 0.5625 * (RAM in GB)
+	  n1-standard-16: 33.75G
+	  i4i.4xlarge: 72G
+
+config SYSBENCH_MYSQL_INNODB_BUFFER_POOL_SIZE
+	string "innodb_buffer_pool_size"
+	output yaml
+	default "512M" if SYSBENCH_TYPE_MYSQL_DOCKER
+	help
+	  For systems with above 4 GiB RAM: 0.75 * (RAM in GB)
+	  n1-standard-16: 45G
+	  i4i.4xlarge: 96G
+
+endif # SYSBENCH_DB_TYPE_MYSQL
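The sizing formulas in the help texts above (0.5625 * RAM for innodb_redo_log_capacity, 0.75 * RAM for innodb_buffer_pool_size) can be sanity-checked with a small sketch; the RAM figures match the instance examples given above:

```shell
#!/bin/sh
# Compute the MySQL-recommended dedicated-server sizing values for the
# two instance types used as examples in the help texts above.
set -eu
redo_gb() { awk -v ram="$1" 'BEGIN { printf "%g", 0.5625 * ram }'; }
pool_gb() { awk -v ram="$1" 'BEGIN { printf "%g", 0.75 * ram }'; }
echo "n1-standard-16 (60 GB):  redo=$(redo_gb 60)G  pool=$(pool_gb 60)G"
echo "i4i.4xlarge   (128 GB):  redo=$(redo_gb 128)G pool=$(pool_gb 128)G"
```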
+
+config SYSBENCH_DB_NAME
+	string "Database name to use"
+	output yaml
+	default "sbtest"
+
+config SYSBENCH_DB_USERNAME
+	string "Database username to use"
+	output yaml
+	default "sbtest"
+
+config SYSBENCH_ROOT_DB_PASSWORD
+	string "The root database password to use"
+	output yaml
+	default "kdevops"
+
+config SYSBENCH_DB_PASSWORD
+	string "Database password to use for sysbench database"
+	output yaml
+	default "kdevops"
+
+config SYSBENCH_REPORT_INTERVAL
+	int "Sysbench report interval"
+	output yaml
+	default "2"
+
+config SYSBENCH_OLTP_TABLE_SIZE
+	int "Sysbench OLTP table size"
+	output yaml
+	default "100000"
+
+config SYSBENCH_OLTP_TABLE_COUNT
+	int "Sysbench OLTP table count"
+	output yaml
+	default "24"
+
+config SYSBENCH_THREADS
+	int "Sysbench number of threads"
+	output yaml
+	default "128"
+	help
+	  Set the number of threads for the sysbench test. Default is 128.
+
+config SYSBENCH_TEST_DURATION
+	int "Sysbench Test Duration (seconds)"
+	default 3600
+	output yaml
+	help
+	  Set the duration of the sysbench test in seconds. Default is 3600 (1 hour).
+
+config SYSBENCH_TELEMETRY_PATH
+	string "Where to collect telemetry information"
+	default "/data/sysbench-telemetry"
+	output yaml
+	help
+	  Set the path where collected telemetry information is placed. By
+	  default we use a directory under /data/, as we strive to mkfs
+	  the filesystem mounted on /data/ as a separate filesystem from
+	  the one holding the database. This way we separate the telemetry
+	  IOs from the database IOs.
diff --git a/workflows/sysbench/Kconfig.docker b/workflows/sysbench/Kconfig.docker
new file mode 100644
index 000000000000..75c5263411fa
--- /dev/null
+++ b/workflows/sysbench/Kconfig.docker
@@ -0,0 +1,130 @@
+choice
+	prompt "What type of sysbench docker container do you want to use?"
+	default SYSBENCH_TYPE_MYSQL_DOCKER
+
+config SYSBENCH_TYPE_MYSQL_DOCKER
+	bool "Use MySQL with Docker for Sysbench"
+	output yaml
+	select SYSBENCH_DB_TYPE_MYSQL
+	help
+	  Enable this option to run sysbench using MySQL inside a Docker
+	  container. The benefit is fewer package dependencies to
+	  install.
+
+endchoice
+
+if SYSBENCH_TYPE_MYSQL_DOCKER
+
+choice
+	prompt "Which MySQL container image to use?"
+	default SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+
+config SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+	bool "mysql:8.0"
+	output yaml
+	help
+	  Uses mysql:8.0 as the docker image.
+
+endchoice
+
+config SYSBENCH_MYSQL_CONTAINER_IMAGE_STRING
+	string
+	output yaml
+	default "mysql:8.0" if SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+
+config SYSBENCH_MYSQL_CONTAINER_NAME
+	string "The name of the local MySQL Docker container"
+	default "mysql-sysbench"
+	output yaml
+	help
+	  Set the name for the MySQL Docker container.
+
+config SYSBENCH_MYSQL_CONTAINER_PYTHON_PATH
+	string "The MySQL container python path"
+	default "/usr/local/lib/python3.9/site-packages" if SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+	output yaml
+	help
+	  To support telemetry analysis we use a set of python packages
+	  which we install with pip. When these are installed locally
+	  we need to inform mysqlsh where they are so that it can find
+	  them. This can vary depending on the MySQL container image
+	  used.
+
+config SYSBENCH_MYSQL_CONTAINER_HOST_CONFIG_PATH
+	string "Path on node where we'll place the mysql configuration"
+	default "/data/mysql.conf"
+	output yaml
+	help
+	  When using a container, we'll use a volume to propagate the actual
+	  MySQL configuration file used inside the container. This lets us make
+	  edits on the node.
+
+config SYSBENCH_MYSQL_CONTAINER_CONFIG
+	string "The MySQL container configuration file"
+	default "/etc/mysql/conf.d/mysql.conf" if SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+	output yaml
+	help
+	  This is where we will place the MySQL configuration file on the
+	  target container. This can vary depending on the version of the
+	  docker container used.
+
+config SYSBENCH_MYSQL_CONTAINER_DB_PATH
+	string "MySQL container db path"
+	default "/var/lib/mysql" if SYSBENCH_MYSQL_CONTAINER_IMAGE_8_0
+	output yaml
+	help
+	  Where to place the database on the container.
+
+config SYSBENCH_MYSQL_CONTAINER_HOST_ROOT_PATH
+	string "Directory on the host to use as /root/ inside the container"
+	default "/data/mysql-container-root"
+	output yaml
+	help
+	  When using a container, in order to support telemetry we rely on
+	  a mysqlsh plugin which we git clone on the node where we will
+	  run the container. We use a container volume to let the container
+	  access this clone. This specifies the path on the node which we
+	  will pass to the container as a docker volume mounted as its
+	  /root/ directory.
+
+endif # SYSBENCH_TYPE_MYSQL_DOCKER
+
+choice
+	prompt "Which sysbench container image to use?"
+	default SYSBENCH_CONTAINER_SEVERALNINES_SYSBENCH
+
+config SYSBENCH_CONTAINER_SEVERALNINES_SYSBENCH
+	bool "severalnines/sysbench"
+	output yaml
+	help
+	  Use the severalnines/sysbench container when using sysbench.
+
+endchoice
+
+config SYSBENCH_CONTAINER_IMAGE_NAME
+	string
+	default "severalnines/sysbench" if SYSBENCH_CONTAINER_SEVERALNINES_SYSBENCH
+	output yaml
+
+config SYSBENCH_CONTAINER_NAME
+	string "The name of the container to use for sysbench"
+	default "sysbench-kdevops"
+	output yaml
+
+config SYSBENCH_LOCAL_DB_PORT
+	int "The actual local database port to use"
+	default "9901"
+	output yaml
+	help
+	  When using containers we may want to support running different
+	  databases. To support this and yet have the database be able to
+	  run on its own default database port we just use a local arbitrary
+	  port on the actual host, but inside the container the default port
+	  could be used.
+
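The port indirection above maps an arbitrary host port onto the database's default port inside the container. A sketch of the resulting docker publish argument — the variable names are illustrative, and 3306 is MySQL's well-known default port:

```shell
#!/bin/sh
# Build the -p host:container publish argument from the configured values.
set -eu
local_db_port=9901        # SYSBENCH_LOCAL_DB_PORT on the host
container_db_port=3306    # MySQL's default port inside the container
publish_arg="-p ${local_db_port}:${container_db_port}"
# Illustrative invocation only; the actual run is driven by ansible.
echo "docker run ${publish_arg} ... mysql:8.0"
```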
+config SYSBENCH_DOCKER_TELEMETRY_PATH
+	string "The path to place telemetry information on the docker container"
+	default SYSBENCH_TELEMETRY_PATH
+	output yaml
+	help
+	  Where to place telemetry information inside the docker container.
diff --git a/workflows/sysbench/Kconfig.ext4 b/workflows/sysbench/Kconfig.ext4
new file mode 100644
index 000000000000..442b2a0249d9
--- /dev/null
+++ b/workflows/sysbench/Kconfig.ext4
@@ -0,0 +1,124 @@
+config SYSBENCH_FS_EXT4
+	bool "ext4"
+	output yaml
+	help
+	  Enable if you want to test sysbench against ext4.
+
+if SYSBENCH_FS_EXT4
+
+config SYSBENCH_SUPPORTED_FILESYSTEM_EXT4
+	bool
+	output yaml
+	default y
+
+choice
+    prompt "EXT4 filesystem sector size to use"
+    default SYSBENCH_EXT4_SECTOR_SIZE_4K
+
+config SYSBENCH_EXT4_SECTOR_SIZE_512
+	bool "512 bytes"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_512
+	help
+	  Use 512 byte sector size.
+
+config SYSBENCH_EXT4_SECTOR_SIZE_4K
+	bool "4 KiB"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_4K
+	help
+	  Use 4 KiB sector size.
+
+endchoice
+
+config SYSBENCH_EXT4_SECTOR_SIZE
+	string
+	output yaml
+	default "512" if SYSBENCH_EXT4_SECTOR_SIZE_512
+	default "4k"  if SYSBENCH_EXT4_SECTOR_SIZE_4K
+
+config SYSBENCH_EXT4_SECTION_4K
+	bool "ext4_4k"
+	output yaml
+	help
+	  This will create a host to test sysbench on ext4 with the
+	  following configuration, which enables a 4k block size filesystem:
+
+	       mkfs.ext4 -F -b 4k
+
+config SYSBENCH_EXT4_SECTION_4K_CMD
+	string
+	depends on SYSBENCH_EXT4_SECTION_4K
+	output yaml
+	default "mkfs.ext4 -F -b 4k"
+
+config SYSBENCH_EXT4_SECTION_4K_DB_PAGE_SIZE
+	int
+	depends on SYSBENCH_EXT4_SECTION_4K
+	output yaml
+	default "4096"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K
+	bool "ext4_4k_bigalloc_16k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 4k block size with 16k bigalloc clusters:
+
+	       mkfs.ext4 -F -b 4k -O bigalloc -C 16k
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K
+	default "mkfs.ext4 -F -b 4k -O bigalloc -C 16k"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K
+	default "16384"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K
+	bool "ext4_4k_bigalloc_32k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 4k block size with 32k bigalloc clusters:
+
+	       mkfs.ext4 -F -b 4k -O bigalloc -C 32k
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K
+	default "mkfs.ext4 -F -b 4k -O bigalloc -C 32k"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K
+	default "32768"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K
+	bool "ext4_4k_bigalloc_64k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 4k block size with 64k bigalloc clusters:
+
+	       mkfs.ext4 -F -b 4k -O bigalloc -C 64k
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K
+	default "mkfs.ext4 -F -b 4k -O bigalloc -C 64k"
+
+config SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K
+	default "65536"
+
+endif # SYSBENCH_FS_EXT4
diff --git a/workflows/sysbench/Kconfig.fs b/workflows/sysbench/Kconfig.fs
new file mode 100644
index 000000000000..75f163e22b9e
--- /dev/null
+++ b/workflows/sysbench/Kconfig.fs
@@ -0,0 +1,173 @@
+choice
+	prompt "What type of sysbench target test do you want to run?"
+	default SYSBENCH_TEST_ATOMICS
+
+config SYSBENCH_TEST_ATOMICS
+	bool "Large atomic write test"
+	output yaml
+	select KDEVOPS_BASELINE_AND_DEV
+	help
+	  This type of test is aimed at testing the empirical value of support
+	  for large atomic writes on storage devices and their impact on
+	  databases. Most drives today only provide power-fail safe guarantees
+	  when writing up to 4 KiB. Drives which support 16 KiB atomic writes
+	  or larger can take advantage of database features which otherwise
+	  rely on software workarounds to guarantee that writes above 4 KiB
+	  are recoverable in case of power failure.
+
+	  Different databases have features one can disable if the device
+	  supports an atomic write matching the configured database page
+	  size. Below we list the features which can be disabled:
+
+	    * MySQL:
+		- innodb_doublewrite
+	    * PostgreSQL:
+		- full_page_writes
+
+	  In order to test disabling these features we need A/B testing for
+	  each target filesystem configuration we want to support. kdevops
+	  supports A/B testing with KDEVOPS_BASELINE_AND_DEV, which creates
+	  an extra "dev" node for each target test we want. In the case of
+	  filesystem testing we therefore end up with two nodes for each
+	  filesystem we want to test.
+
+	  Configurations for sysbench can vary depending on the target system.
+	  A sensible baseline configuration is selected depending on the type
+	  of bringup you configure with kdevops. Each type of virtualization
+	  and cloud setup can have its own database and/or sysbench
+	  configuration.
+
+endchoice
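As a concrete sketch of what disabling the MySQL feature listed above looks like, the fragment below generates a minimal my.cnf override. The option names are real MySQL 8.0 options; the file path and page size value are illustrative, not taken from this patch, and disabling doublewrite is only safe when the device's atomic write unit covers the InnoDB page size:

```shell
#!/bin/sh
# Generate a hypothetical my.cnf override that disables the doublewrite
# buffer and pins the InnoDB page size to match the filesystem
# block/cluster size chosen for the test.
set -eu
page_size=16384   # illustrative; must match the fs configuration
cat > /tmp/kdevops-doublewrite.cnf <<EOF
[mysqld]
innodb_doublewrite = 0
innodb_page_size = ${page_size}
EOF
cat /tmp/kdevops-doublewrite.cnf
```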
+
+if SYSBENCH_TEST_ATOMICS
+
+choice
+	prompt "What type of atomic test do you want to run?"
+	default SYSBENCH_TEST_ATOMICS_TPS_VARIABILITY
+
+config SYSBENCH_TEST_ATOMICS_TPS_VARIABILITY
+	bool "TPS variability test"
+	output yaml
+	select KDEVOPS_BASELINE_AND_DEV
+	help
+	  This test is designed to verify the TPS variability over long
+	  periods of time on a database when atomic writes are enabled.
+
+endchoice
+
+choice
+	prompt "When do you want to disable innodb_doublewrite?"
+	default SYSBENCH_DISABLE_DOUBLEWRITE_AUTO
+
+config SYSBENCH_DISABLE_DOUBLEWRITE_AUTO
+	bool "Use hostname postfix"
+	output yaml
+	help
+	  To allow for A/B testing this option will only disable
+	  innodb_doublewrite on nodes which have a hostname ending
+	  in "-dev".
+
+config SYSBENCH_DISABLE_DOUBLEWRITE_ALWAYS
+	bool "Disable it always"
+	output yaml
+	help
+	  If you don't want to spawn nodes for A/B testing and just want
+	  to test disabling innodb_doublewrite, enable this.
+
+endchoice
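The "-dev" hostname convention above drives the A/B selection: baseline nodes keep doublewrite on, "-dev" nodes turn it off. A sketch of that decision (hostnames here are illustrative):

```shell
#!/bin/sh
# Decide per-node whether innodb_doublewrite should be disabled, based
# on the kdevops "-dev" hostname postfix convention.
set -eu
wants_doublewrite_off() {
        case "$1" in
                *-dev) return 0 ;;   # dev node: disable doublewrite
                *)     return 1 ;;   # baseline node: keep it enabled
        esac
}
for h in sysbench-xfs-16k sysbench-xfs-16k-dev; do
        if wants_doublewrite_off "$h"; then
                echo "$h: innodb_doublewrite=0"
        else
                echo "$h: innodb_doublewrite=1"
        fi
done
```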
+
+config SYSBENCH_TEST_ATOMICS_XFS_16K_4KS_LBS
+	bool "XFS 16k LBS - 4k sector size"
+	select SYSBENCH_FS_XFS
+	select SYSBENCH_XFS_SECTOR_SIZE_4K
+	select SYSBENCH_XFS_SECTION_REFLINK_16K
+	output yaml
+	help
+	  This enables the XFS filesystem configuration to test 16k atomics
+	  with LBS.
+
+config SYSBENCH_TEST_ATOMICS_XFS_32K_4KS_LBS
+	bool "XFS 32k LBS - 4k sector size"
+	select SYSBENCH_FS_XFS
+	select SYSBENCH_XFS_SECTOR_SIZE_4K
+	select SYSBENCH_XFS_SECTION_REFLINK_32K
+	output yaml
+	help
+	  This enables the XFS filesystem configuration to test 32k atomics
+	  with LBS.
+
+config SYSBENCH_TEST_ATOMICS_XFS_64K_4KS_LBS
+	bool "XFS 64k LBS - 4k sector size"
+	select SYSBENCH_FS_XFS
+	select SYSBENCH_XFS_SECTOR_SIZE_4K
+	select SYSBENCH_XFS_SECTION_REFLINK_64K
+	output yaml
+	help
+	  This enables the XFS filesystem configuration to test 64k atomics
+	  with LBS.
+
+config SYSBENCH_TEST_ATOMICS_EXT4_4K_4KS_BIGALLOC_16K
+	bool "ext4 4k block size, bigalloc 16k cluster size - 4k sector size"
+	select SYSBENCH_FS_EXT4
+	select SYSBENCH_EXT4_SECTOR_SIZE_4K
+	select SYSBENCH_EXT4_SECTION_4K_BIGALLOC_16K
+	output yaml
+	help
+	  This enables the ext4 filesystem configuration to test 16k atomics
+	  with a 4 KiB data and sector size and the bigalloc feature with 16k
+	  cluster sizes.
+
+config SYSBENCH_TEST_ATOMICS_EXT4_4K_4KS_BIGALLOC_32K
+	bool "ext4 4k block size, bigalloc 32k cluster size - 4k sector size"
+	select SYSBENCH_FS_EXT4
+	select SYSBENCH_EXT4_SECTOR_SIZE_4K
+	select SYSBENCH_EXT4_SECTION_4K_BIGALLOC_32K
+	output yaml
+	help
+	  This enables the ext4 filesystem configuration to test 32k atomics
+	  with a 4 KiB data and sector size and the bigalloc feature with 32k
+	  cluster sizes.
+
+config SYSBENCH_TEST_ATOMICS_EXT4_4K_4KS_BIGALLOC_64K
+	bool "ext4 4k block size, bigalloc 64k cluster size - 4k sector size"
+	select SYSBENCH_FS_EXT4
+	select SYSBENCH_EXT4_SECTOR_SIZE_4K
+	select SYSBENCH_EXT4_SECTION_4K_BIGALLOC_64K
+	output yaml
+	help
+	  This enables the ext4 filesystem configuration to test 64k atomics
+	  with a 4 KiB data and sector size and the bigalloc feature with 64k
+	  cluster sizes.
+
+endif # SYSBENCH_TEST_ATOMICS
+
+config SYSBENCH_DEVICE
+	string "Device to use to create a filesystem for sysbench tests"
+	output yaml
+	default "/dev/disk/by-id/nvme-QEMU_NVMe_Ctrl_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_NVME
+	default "/dev/disk/by-id/virtio-kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_VIRTIO
+	default "/dev/disk/by-id/ata-QEMU_HARDDISK_kdevops1" if LIBVIRT && LIBVIRT_EXTRA_STORAGE_DRIVE_IDE
+	default "/dev/nvme2n1" if TERRAFORM_AWS_INSTANCE_M5AD_4XLARGE
+	default "/dev/nvme1n1" if TERRAFORM_GCE
+	default "/dev/sdd" if TERRAFORM_AZURE
+	default TERRAFORM_OCI_SPARSE_VOLUME_DEVICE_FILE_NAME if TERRAFORM_OCI
+	help
+	  The device to use to create a filesystem where we will place the
+	  database.
+
+config SYSBENCH_LABEL
+	string "The label to use"
+	output yaml
+	default "sysbench_db"
+	help
+	  The label to use when creating the filesystem.
+
+config SYSBENCH_MNT
+	string "Mount point for the database"
+	output yaml
+	default "/db"
+	help
+	  The path on which to mount the filesystem we'll use for the
+	  database.
+
+source "workflows/sysbench/Kconfig.xfs"
+source "workflows/sysbench/Kconfig.ext4"
diff --git a/workflows/sysbench/Kconfig.xfs b/workflows/sysbench/Kconfig.xfs
new file mode 100644
index 000000000000..adc0b031e71e
--- /dev/null
+++ b/workflows/sysbench/Kconfig.xfs
@@ -0,0 +1,162 @@
+config SYSBENCH_FS_XFS
+	bool "XFS"
+	output yaml
+	help
+	  Enable if you want to test sysbench against XFS.
+
+if SYSBENCH_FS_XFS
+
+config SYSBENCH_SUPPORTED_FILESYSTEM_XFS
+	bool
+	output yaml
+	default y
+
+choice
+    prompt "XFS filesystem sector size to use"
+    default SYSBENCH_XFS_SECTOR_SIZE_4K
+
+config SYSBENCH_XFS_SECTOR_SIZE_512
+	bool "512 bytes"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_512
+	help
+	  Use 512 byte sector size.
+
+config SYSBENCH_XFS_SECTOR_SIZE_4K
+	bool "4 KiB"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_4K
+	help
+	  Use 4 KiB sector size.
+
+config SYSBENCH_XFS_SECTOR_SIZE_16K
+	bool "16 KiB"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_LARGEIO
+	help
+	  Use 16 KiB sector size.
+
+config SYSBENCH_XFS_SECTOR_SIZE_32K
+	bool "32 KiB"
+	output yaml
+	depends on EXTRA_STORAGE_SUPPORTS_LARGEIO
+	help
+	  Use 32 KiB sector size.
+
+endchoice
+
+config SYSBENCH_XFS_SECTOR_SIZE
+	string
+	output yaml
+	default "512" if SYSBENCH_XFS_SECTOR_SIZE_512
+	default "4k"  if SYSBENCH_XFS_SECTOR_SIZE_4K
+	default "16k" if SYSBENCH_XFS_SECTOR_SIZE_16K
+	default "32k" if SYSBENCH_XFS_SECTOR_SIZE_32K
+
+config SYSBENCH_XFS_SECTION_REFLINK_4K
+	bool "xfs_reflink_4k"
+	output yaml
+	help
+	  This will create a host to test sysbench on XFS with the
+	  following configuration, which enables reflink using a 4096 byte
+	  block size:
+
+	      mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=4k
+
+config SYSBENCH_XFS_SECTION_REFLINK_4K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_4K
+	default "mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=4k"
+
+config SYSBENCH_XFS_SECTION_REFLINK_4K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_4K
+	default "4096"
+
+config SYSBENCH_XFS_SECTION_REFLINK_8K
+	bool "xfs_reflink_8k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is an 8k data block size:
+
+	      mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=8k
+
+config SYSBENCH_XFS_SECTION_REFLINK_8K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_8K
+	default "mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=8k"
+
+config SYSBENCH_XFS_SECTION_REFLINK_8K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_8K
+	default "8192"
+
+config SYSBENCH_XFS_SECTION_REFLINK_16K
+	bool "xfs_reflink_16k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 16k data block size:
+
+	      mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=16k
+
+config SYSBENCH_XFS_SECTION_REFLINK_16K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_16K
+	default "mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=16k"
+
+config SYSBENCH_XFS_SECTION_REFLINK_16K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_16K
+	default "16384"
+
+config SYSBENCH_XFS_SECTION_REFLINK_32K
+	bool "xfs_reflink_32k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 32k data block size:
+
+	      mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=32k
+
+config SYSBENCH_XFS_SECTION_REFLINK_32K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_32K
+	default "mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=32k"
+
+config SYSBENCH_XFS_SECTION_REFLINK_32K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_32K
+	default "32768"
+
+config SYSBENCH_XFS_SECTION_REFLINK_64K
+	bool "xfs_reflink_64k"
+	output yaml
+	help
+	  This will create a host to test sysbench with the following fs
+	  configuration, that is a 64k data block size:
+
+	      mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=64k
+
+config SYSBENCH_XFS_SECTION_REFLINK_64K_CMD
+	string
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_64K
+	default "mkfs.xfs -f -m reflink=1,rmapbt=1 -i sparse=1 -b size=64k"
+
+config SYSBENCH_XFS_SECTION_REFLINK_64K_DB_PAGE_SIZE
+	int
+	output yaml
+	depends on SYSBENCH_XFS_SECTION_REFLINK_64K
+	default "65536"
+
+endif # SYSBENCH_FS_XFS
diff --git a/workflows/sysbench/Makefile b/workflows/sysbench/Makefile
new file mode 100644
index 000000000000..9238ddc723a5
--- /dev/null
+++ b/workflows/sysbench/Makefile
@@ -0,0 +1,66 @@
+PHONY += sysbench sysbench-test sysbench-telemetry sysbench-results
+PHONY += sysbench-results-clean sysbench-help-menu
+
+ifeq (y,$(CONFIG_WORKFLOWS_DEDICATED_WORKFLOW))
+export KDEVOPS_HOSTS_TEMPLATE := sysbench.j2
+endif
+
+TAGS_SYSBENCH_RUN := db_start
+TAGS_SYSBENCH_RUN += post_entrypoint
+TAGS_SYSBENCH_RUN += populate_sbtest
+TAGS_SYSBENCH_RUN += run_sysbench
+TAGS_SYSBENCH_RUN += telemetry
+TAGS_SYSBENCH_RUN += logs
+TAGS_SYSBENCH_RUN += results
+
+# Tags for running sysbench tests
+TAGS_SYSBENCH_TEST := vars
+TAGS_SYSBENCH_TEST += $(TAGS_SYSBENCH_RUN)
+
+# Tags for collecting telemetry only
+TAGS_SYSBENCH_TELEMETRY := vars
+TAGS_SYSBENCH_TELEMETRY += telemetry
+
+# Tags for collecting results only
+TAGS_SYSBENCH_RESULTS := vars
+TAGS_SYSBENCH_RESULTS += results
+
+# Target to set up sysbench (MySQL or PostgreSQL)
+sysbench:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-i hosts playbooks/sysbench.yml \
+		--skip-tags $(subst $(space),$(comma),$(TAGS_SYSBENCH_RUN)),clean
+
+# Target to run sysbench tests (including telemetry)
+sysbench-test:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-i hosts playbooks/sysbench.yml \
+		--tags $(subst $(space),$(comma),$(TAGS_SYSBENCH_TEST))
+
+# Optional target to collect telemetry
+sysbench-telemetry:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-i hosts playbooks/sysbench.yml \
+		--tags $(subst $(space),$(comma),$(TAGS_SYSBENCH_TELEMETRY))
+
+# Optional target to collect all results
+sysbench-results:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-i hosts playbooks/sysbench.yml \
+		--tags $(subst $(space),$(comma),$(TAGS_SYSBENCH_RESULTS))
+
+sysbench-results-clean:
+	$(Q)ansible-playbook $(ANSIBLE_VERBOSE) \
+		-i hosts playbooks/sysbench.yml \
+		--tags vars,clean
+
+# Help target to show available options
+sysbench-help-menu:
+	@echo "Sysbench options:"
+	@echo "sysbench                          - Set up sysbench (MySQL or PostgreSQL)"
+	@echo "sysbench-test                     - Run sysbench tests and collect results (with telemetry)"
+	@echo "sysbench-telemetry                - Gather sysbench telemetry data on each node"
+	@echo "sysbench-results                  - Collect all sysbench results onto local host"
+	@echo "sysbench-results-clean            - Remove any previous results on node and host"
+	@echo ""
+
+HELP_TARGETS += sysbench-help-menu
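The targets above turn the space-separated tag lists into the comma-separated form ansible-playbook expects via $(subst $(space),$(comma),...). The same transformation, sketched in shell for illustration:

```shell
#!/bin/sh
# Space-separated tags, as in TAGS_SYSBENCH_TEST above.
set -eu
tags="vars db_start post_entrypoint populate_sbtest run_sysbench telemetry logs results"
# Equivalent of make's $(subst $(space),$(comma),$(TAGS_SYSBENCH_TEST)).
tag_arg=$(printf '%s' "$tags" | tr ' ' ',')
echo "ansible-playbook -i hosts playbooks/sysbench.yml --tags $tag_arg"
```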
-- 
2.45.2

