* [PATCH blktests v4 0/3] bcache: add initial test cases
@ 2026-02-12 15:23 Daniel Wagner
2026-02-12 15:23 ` [PATCH blktests v4 1/3] bcache: add bcache/001 Daniel Wagner
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Daniel Wagner @ 2026-02-12 15:23 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
I've updated v3 with the feedback from Shinichiro on v2.
Shinichiro, please note that I rewrote some of the logic in v3, so some of your
comments no longer applied. But hopefully I haven't made a big mess :)
Cheers,
Daniel
[1] https://lore.kernel.org/linux-bcache/CANubcdX7eNbH_bo4-f94DUbdiEbt04Vxy1MPyhm+CZyXB01FuQ@mail.gmail.com/
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Changes in v4:
- changed file mode to 755 for 001 and 002
- changed license to GPL-3.0+
- use group_requires
- fixed whitespace damage
- dropped unnecessary '|| true'
- added 'local' for local variables
- added wait loop for register interface to show up
- updated documentation
- Link to v3: https://patch.msgid.link/20260122-bcache-v3-0-2c02d15a4503@suse.de
Changes in v3:
- add bcache/002
- return created bcache devices to the test case
- made cleanup more robust (handling detached cache)
- track all resources correctly
- operate only on known devices in the final cleanup
- Link to v2: https://patch.msgid.link/20260121-bcache-v2-0-b26af185e63a@suse.de
Changes in v2:
- fixed whitespace damage
- added documentation on how to configure for bcache tests
- do registering explicitly
- made disk wiping more robust
- Link to v1: https://patch.msgid.link/20260120-bcache-v1-1-59bf0b2d4140@suse.de
---
Daniel Wagner (3):
bcache: add bcache/001
bcache: add bcache/002
doc: document how to configure bcache tests
Documentation/running-tests.md | 10 ++
tests/bcache/001 | 44 +++++
tests/bcache/001.out | 3 +
tests/bcache/002 | 62 +++++++
tests/bcache/002.out | 2 +
tests/bcache/rc | 375 +++++++++++++++++++++++++++++++++++++++++
6 files changed, 496 insertions(+)
---
base-commit: e387a7e0169cc012eb6a7140a0561d2901c92a76
change-id: 20260120-bcache-35ec7368c8f4
Best regards,
--
Daniel Wagner <dwagner@suse.de>
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH blktests v4 1/3] bcache: add bcache/001
2026-02-12 15:23 [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
@ 2026-02-12 15:23 ` Daniel Wagner
2026-02-17 7:42 ` Shinichiro Kawasaki
2026-02-12 15:23 ` [PATCH blktests v4 2/3] bcache: add bcache/002 Daniel Wagner
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Daniel Wagner @ 2026-02-12 15:23 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
So far we are missing tests for bcache. Besides a relatively simple
setup/teardown test, also add the corresponding infrastructure. More
tests are expected to depend on this.
_create_bcache/_remove_bcache track the resources and complain if
anything is missing.
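Both helpers accept multi-value flags (e.g. --bdev dev1 dev2), collecting
arguments until the next flag or the end of the input. A condensed,
standalone sketch of that parsing loop (device names are made up):

```shell
#!/bin/bash
# Standalone sketch of the multi-value flag parsing used by
# _create_bcache/_remove_bcache: each --flag collects arguments
# until the next flag or the end of the input.
parse_demo() {
	local -a cdevs=() bdevs=()

	while [[ $# -gt 0 ]]; do
		case $1 in
		--cache)
			shift
			# Collect arguments until the next flag or end of input
			while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
				cdevs+=("$1")
				shift
			done
			;;
		--bdev)
			shift
			while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
				bdevs+=("$1")
				shift
			done
			;;
		*)
			echo "WARNING: unknown argument: $1" >&2
			shift
			;;
		esac
	done

	echo "caches: ${cdevs[*]}"
	echo "backing: ${bdevs[*]}"
}

parse_demo --cache vdb --bdev vdc vdd
```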
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/bcache/001 | 44 ++++++
tests/bcache/001.out | 3 +
tests/bcache/rc | 375 +++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 422 insertions(+)
diff --git a/tests/bcache/001 b/tests/bcache/001
new file mode 100755
index 000000000000..7258d87566cb
--- /dev/null
+++ b/tests/bcache/001
@@ -0,0 +1,44 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2026 Daniel Wagner, SUSE Labs
+#
+# Test bcache setup and teardown
+
+. tests/bcache/rc
+
+DESCRIPTION="test bcache setup and teardown"
+
+test_device_array() {
+ echo "Running ${TEST_NAME}"
+
+ if [[ ${#TEST_DEV_ARRAY[@]} -lt 3 ]]; then
+ SKIP_REASONS+=("requires at least 3 devices")
+ return 1
+ fi
+
+ _setup_bcache "${TEST_DEV_ARRAY[@]}"
+
+ local bcache_nodes
+
+ mapfile -t bcache_nodes < <(_create_bcache \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}" \
+ --writeback)
+
+ echo "number of bcaches: ${#bcache_nodes[*]}"
+
+ _remove_bcache --bcache "${bcache_nodes[@]}" \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}"
+
+ mapfile -t bcache_nodes < <(_create_bcache \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}" "${TEST_DEV_ARRAY[2]##*/}" \
+ --writeback)
+
+ echo "number of bcaches: ${#bcache_nodes[*]}"
+
+ _remove_bcache --bcache "${bcache_nodes[@]}" \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}" "${TEST_DEV_ARRAY[2]##*/}"
+}
diff --git a/tests/bcache/001.out b/tests/bcache/001.out
new file mode 100644
index 000000000000..844154e13822
--- /dev/null
+++ b/tests/bcache/001.out
@@ -0,0 +1,3 @@
+Running bcache/001
+number of bcaches: 1
+number of bcaches: 2
diff --git a/tests/bcache/rc b/tests/bcache/rc
new file mode 100644
index 000000000000..cfd4094c2fe0
--- /dev/null
+++ b/tests/bcache/rc
@@ -0,0 +1,375 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2026 Daniel Wagner, SUSE Labs
+
+. common/rc
+
+declare BCACHE_DEVS_LIST
+
+BCACHE_MAX_RETRIES=5
+
+group_requires() {
+ _have_kernel_options MD BCACHE BCACHE_DEBUG AUTOFS_FS
+ _have_program make-bcache
+ _have_crypto_algorithm crc32c
+}
+
+_bcache_wipe_devs() {
+ local devs=("$@")
+ local dev
+
+ for dev in "${devs[@]}"; do
+ # Attempt a clean wipe first
+ if wipefs --all --quiet "${dev}" 2>/dev/null; then
+ continue
+ fi
+
+ # Overwrite the first 10MB to clear stubborn partition tables or metadata
+ if ! dd if=/dev/zero of="${dev}" bs=1M count=10 conv=notrunc status=none; then
+ echo "Error: dd failed on ${dev}" >&2
+ fi
+
+ # Wipe the Tail (Last 5MB)
+ # bcache often places backup superblocks at the end of the device.
+ local dev_size_mb
+ dev_size_mb=$(blockdev --getsize64 "$dev" | awk '{print int($1 / 1024 / 1024)}')
+
+ if [ "$dev_size_mb" -gt 10 ]; then
+ local seek_pos=$((dev_size_mb - 5))
+ dd if=/dev/zero of="${dev}" bs=1M count=5 seek=$seek_pos conv=fsync status=none
+ fi
+
+ # Refresh kernel partition table & wait for udev
+ partprobe "$dev" 2>/dev/null
+ udevadm settle
+
+ # Try wiping again after clearing the headers
+ if ! wipefs --all --quiet --force "${dev}"; then
+ echo "Warning: Failed to wipe ${dev} even after dd." >&2
+ fi
+ done
+}
+
+_bcache_register() {
+ local devs=("$@")
+ local dev timeout=0
+
+ while [[ ! -w /sys/fs/bcache/register ]] && (( timeout < 10 )); do
+ sleep 1
+ (( timeout ++ ))
+ done
+
+ if [[ ! -w /sys/fs/bcache/register ]]; then
+ echo "ERROR: bcache registration interface not found." >&2
+ return 1
+ fi
+
+ for dev in "${devs[@]}"; do
+ local tmp_err
+
+ tmp_err="/tmp/bcache_reg_$$.err"
+ if ! echo "${dev}" > /sys/fs/bcache/register 2> "${tmp_err}"; then
+ local err_msg
+
+ err_msg=$(< "${tmp_err}")
+ if [[ "${err_msg}" != *"Device or resource busy"* ]]; then
+ echo "ERROR: Failed to register ${dev}: ${err_msg:-"Unknown error"}" >&2
+ fi
+ fi
+ rm -f "${tmp_err}"
+ done
+}
+
+_create_bcache() {
+ local -a cdevs=()
+ local -a bdevs=()
+ local -a ARGS=()
+ local -a created_devs=()
+ local bucket_size="64k"
+ local block_size="4k"
+
+ while [[ $# -gt 0 ]]; do
+ case $1 in
+ --cache)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ cdevs+=("$1")
+ shift
+ done
+ ;;
+ --bdev)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ bdevs+=("$1")
+ shift
+ done
+ ;;
+ --bucket-size)
+ bucket_size="$2"
+ shift 2
+ ;;
+ --block-size)
+ block_size="$2"
+ shift 2
+ ;;
+ --writeback)
+ ARGS+=(--writeback)
+ shift 1
+ ;;
+ --discard)
+ ARGS+=(--discard)
+ shift 1
+ ;;
+ *)
+ echo "WARNING: unknown argument: $1"
+ shift
+ ;;
+ esac
+ done
+
+ # add /dev prefix to device names
+ cdevs=( "${cdevs[@]/#/\/dev\/}" )
+ bdevs=( "${bdevs[@]/#/\/dev\/}" )
+
+ # make-bcache expects empty/cleared devices
+ _bcache_wipe_devs "${cdevs[@]}" "${bdevs[@]}"
+
+ local -a cmd
+ cmd=(make-bcache --wipe-bcache \
+ --bucket "${bucket_size}" \
+ --block "${block_size}")
+ for dev in "${cdevs[@]}"; do cmd+=("--cache" "${dev}"); done
+ for dev in "${bdevs[@]}"; do cmd+=("--bdev" "${dev}"); done
+ cmd+=("${ARGS[@]}")
+
+ local output rc
+ output=$("${cmd[@]}" 2>&1)
+ rc="$?"
+ if [[ "${rc}" -ne 0 ]]; then
+ echo "ERROR: make-bcache failed:" >&2
+ echo "$output" >&2
+ return 1
+ fi
+
+ local cset_uuid
+ cset_uuid=$(echo "$output" | awk '/Set UUID:/ {print $3}' | head -n 1)
+ if [[ -z "${cset_uuid}" ]]; then
+ echo "ERROR: Could not extract cset UUID from make-bcache output" >&2
+ return 1
+ fi
+
+ local -a bdev_uuids
+ mapfile -t bdev_uuids < <(echo "$output" | awk '
+ $1 == "UUID:" { last_uuid = $2 }
+ $1 == "version:" && $2 == "1" { print last_uuid}
+ ')
+
+ _bcache_register "${cdevs[@]}" "${bdevs[@]}"
+ udevadm settle
+
+ for uuid in "${bdev_uuids[@]}"; do
+ local link found
+
+ link=/dev/bcache/by-uuid/"${uuid}"
+ found=false
+
+ for ((i=0; i<BCACHE_MAX_RETRIES; i++)); do
+ if [[ -L "${link}" ]]; then
+ created_devs+=("$(readlink -f "${link}")")
+ found=true
+ break
+ fi
+
+ # poke udev to create the links
+ udevadm trigger "block/$(basename "$(readlink -f "${link}" 2>/dev/null || echo "notfound")")" 2>/dev/null
+ sleep 1
+ done
+
+ if [[ "${found}" == "false" ]]; then
+ echo "WARNING: Could not find device node for UUID ${uuid} after ${BCACHE_MAX_RETRIES}s" >&2
+ fi
+ done
+
+ printf "%s\n" "${created_devs[@]}"
+}
+
+_remove_bcache() {
+ local -a cdevs=()
+ local -a bdevs=()
+ local -a csets=()
+ local -a bcache_devs=()
+ local uuid
+
+ while [[ $# -gt 0 ]]; do
+ case $1 in
+ --cache)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ cdevs+=("$1")
+ shift
+ done
+ ;;
+ --bdev)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ bdevs+=("$1")
+ shift
+ done
+ ;;
+ --bcache)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ bcache_devs+=("$1")
+ shift
+ done
+ ;;
+ *)
+ echo "WARNING: unknown argument: $1"
+ shift
+ ;;
+ esac
+ done
+
+ for dev in "${bcache_devs[@]}"; do
+ local bcache bcache_dir
+
+ if mountpoint -q "${dev}" 2>/dev/null; then
+ umount -l "${dev}"
+ fi
+
+ bcache="${dev##*/}"
+ bcache_dir=/sys/block/"${bcache}"/bcache
+ if [ -f "${bcache_dir}"/stop ]; then
+ echo 1 > "${bcache_dir}"/stop
+ fi
+ done
+
+ # The cache could be detached, thus go through all caches and
+ # look for the cdev in there.
+ local cset_path
+ for cset_path in /sys/fs/bcache/*-*-*-*-*; do
+ local cache_link match_found
+
+ match_found=false
+ for cache_link in "${cset_path}"/cache[0-9]*; do
+ local full_sys_path _cdev cdev
+
+ full_sys_path="$(readlink -f "$cache_link")"
+ _cdev="$(basename "${full_sys_path%/bcache}")"
+
+ for cdev in "${cdevs[@]}"; do
+ if [ "${_cdev}" == "$(basename "${cdev}")" ]; then
+ match_found=true
+ break 2
+ fi
+ done
+ done
+
+ if [ "${match_found}" = false ]; then
+ continue
+ fi
+
+ cset="$(basename "${cset_path}")"
+ if [ -d /sys/fs/bcache/"${cset}" ]; then
+ echo 1 > /sys/fs/bcache/"${cset}"/unregister
+ csets+=("${cset}")
+ fi
+ done
+
+ udevadm settle
+
+ local timeout
+ for cset in "${csets[@]}"; do
+ timeout=0
+ while [[ -d /sys/fs/bcache/"${cset}" ]] && (( timeout < 10 )); do
+ sleep 0.5
+ (( timeout++ ))
+ done
+ done
+
+ _bcache_wipe_devs "${cdevs[@]}" "${bdevs[@]}"
+}
+
+_cleanup_bcache() {
+ local cset dev bcache bcache_devs cset_path
+ local -a csets=()
+
+ read -r -a bcache_devs <<< "${BCACHE_DEVS_LIST:-}"
+
+ # Don't let successive Ctrl-Cs interrupt the cleanup processes
+ trap '' SIGINT
+
+ shopt -s nullglob
+ for bcache in /sys/block/bcache* ; do
+ [ -e "${bcache}" ] || continue
+
+ if [[ -f "${bcache}/bcache/backing_dev_name" ]]; then
+ bdev=$(basename "$(cat "${bcache}/bcache/backing_dev_name")")
+
+ for dev in "${bcache_devs[@]}"; do
+ if [[ "${bdev}" == "$(basename "${dev}")" ]]; then
+ echo "WARNING: Stopping bcache device ${bdev}"
+ echo 1 > /sys/block/"${bdev}"/bcache/stop 2>/dev/null
+ break
+ fi
+ done
+ fi
+ done
+
+ for cset_path in /sys/fs/bcache/*-*-*-*-*; do
+ local cache_link match_found
+
+ match_found=false
+ for cache_link in "${cset_path}"/cache[0-9]*; do
+ local full_sys_path cdev
+
+ full_sys_path="$(readlink -f "$cache_link")"
+ cdev="$(basename "${full_sys_path%/bcache}")"
+
+ for dev in "${bcache_devs[@]}"; do
+ if [ "${cdev}" == "$(basename "${dev}")" ]; then
+ match_found=true
+ break 2
+ fi
+ done
+ done
+
+ if [ "${match_found}" = false ]; then
+ continue
+ fi
+
+ cset="$(basename "${cset_path}")"
+ if [ -d /sys/fs/bcache/"${cset}" ]; then
+ echo "WARNING: Unregistering cset $(basename "${cset}")"
+ echo 1 > /sys/fs/bcache/"${cset}"/unregister
+ csets+=("${cset}")
+ fi
+ done
+ shopt -u nullglob
+
+ udevadm settle
+
+ local timeout
+ for cset in "${csets[@]}"; do
+ timeout=0
+ while [[ -d /sys/fs/bcache/"${cset}" ]] && (( timeout < 10 )); do
+ sleep 0.5
+ (( timeout++ ))
+ done
+ done
+
+ _bcache_wipe_devs "${bcache_devs[@]}"
+
+ trap SIGINT
+}
+
+_setup_bcache() {
+ BCACHE_DEVS_LIST="$*"
+
+ _register_test_cleanup _cleanup_bcache
+}
--
2.53.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH blktests v4 2/3] bcache: add bcache/002
2026-02-12 15:23 [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
2026-02-12 15:23 ` [PATCH blktests v4 1/3] bcache: add bcache/001 Daniel Wagner
@ 2026-02-12 15:23 ` Daniel Wagner
2026-02-17 7:50 ` Shinichiro Kawasaki
2026-02-12 15:23 ` [PATCH blktests v4 3/3] doc: document how to configure bcache tests Daniel Wagner
2026-03-02 13:54 ` [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
3 siblings, 1 reply; 10+ messages in thread
From: Daniel Wagner @ 2026-02-12 15:23 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
Add a test case from Stephen Zhang [1].
[1] https://lore.kernel.org/linux-bcache/CANubcdX7eNbH_bo4-f94DUbdiEbt04Vxy1MPyhm+CZyXB01FuQ@mail.gmail.com/
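The test derives the backing device's utilization from the last column of
'iostat -x' output and compares it as a float with bc. Since requires()
does not check for bc, the comparison could equally be done with awk
alone; a standalone sketch with a fabricated stats line:

```shell
#!/bin/bash
# Standalone sketch of the %util check. The stats line below is
# fabricated; in the test it comes from 'iostat -x 1 2 <dev>'.
stats_line="vdc 0.00 0.00 0.00 0.00 0.00 0.40"
util="$(echo "${stats_line}" | awk '{print $NF}')"

# awk-only float comparison (avoids the dependency on bc)
if awk -v u="${util}" 'BEGIN { exit !(u > 1.0) }'; then
	echo "ERROR: Accounting leak detected!"
else
	echo "util ${util} is within bounds"
fi
```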
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/bcache/002 | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++
tests/bcache/002.out | 2 ++
2 files changed, 64 insertions(+)
diff --git a/tests/bcache/002 b/tests/bcache/002
new file mode 100755
index 000000000000..c27178a90c2d
--- /dev/null
+++ b/tests/bcache/002
@@ -0,0 +1,62 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2026 Daniel Wagner, SUSE Labs
+#
+# Test based on Stephen Zhang's <starzhangzsd@gmail.com> test case
+# https://lore.kernel.org/linux-bcache/CANubcdX7eNbH_bo4-f94DUbdiEbt04Vxy1MPyhm+CZyXB01FuQ@mail.gmail.com/#t
+#
+# Test bcache for bio leaks in clone
+
+. tests/bcache/rc
+
+DESCRIPTION="test bcache for bio leaks in clone"
+
+requires() {
+ _have_fio
+ _have_program iostat
+}
+
+test_device_array() {
+ echo "Running ${TEST_NAME}"
+
+ if [[ ${#TEST_DEV_ARRAY[@]} -lt 2 ]]; then
+ SKIP_REASONS+=("requires at least 2 devices")
+ return 1
+ fi
+
+ _setup_bcache "${TEST_DEV_ARRAY[@]}"
+
+ local bcache_nodes bcache_dev bdev_name fio_pid
+
+ mapfile -t bcache_nodes < <(_create_bcache \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}" \
+ --writeback)
+
+ bcache_dev="${bcache_nodes[0]}"
+ bdev_name="$(basename "${bcache_dev}")"
+ echo 1 > /sys/block/"${bdev_name}"/bcache/detach
+
+ state="$(cat /sys/block/"${bdev_name}"/bcache/state)"
+ echo "Device state: ${state}"
+
+ _run_fio_rand_io --filename="${bcache_dev}" --time_base \
+ --runtime=30 >> "$FULL" 2>&1 &
+ fio_pid=$!
+
+ sleep 5
+
+ local stats_line util
+ stats_line=$(iostat -x 1 2 "${bdev_name}" | grep -w "${bdev_name}" | tail -n 1)
+ util="$(echo "${stats_line}" | awk '{print $NF}')"
+
+ if (( $(echo "${util} > 1.0" | bc -l) )); then
+ echo "ERROR: Accounting leak detected!"
+ fi
+
+ { pkill -f "fio.*${bcache_dev}"; wait "${fio_pid}"; } &> /dev/null
+
+ _remove_bcache --bcache "${bcache_nodes[@]}" \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}"
+}
diff --git a/tests/bcache/002.out b/tests/bcache/002.out
new file mode 100644
index 000000000000..529c1a90b135
--- /dev/null
+++ b/tests/bcache/002.out
@@ -0,0 +1,2 @@
+Running bcache/002
+Device state: no cache
--
2.53.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH blktests v4 3/3] doc: document how to configure bcache tests
2026-02-12 15:23 [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
2026-02-12 15:23 ` [PATCH blktests v4 1/3] bcache: add bcache/001 Daniel Wagner
2026-02-12 15:23 ` [PATCH blktests v4 2/3] bcache: add bcache/002 Daniel Wagner
@ 2026-02-12 15:23 ` Daniel Wagner
2026-03-02 13:54 ` [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
3 siblings, 0 replies; 10+ messages in thread
From: Daniel Wagner @ 2026-02-12 15:23 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
Add a bcache entry in running-tests which explains how to configure
blktests for the bcache tests.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Documentation/running-tests.md | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/Documentation/running-tests.md b/Documentation/running-tests.md
index f9da042bb3a0..7d935aad3331 100644
--- a/Documentation/running-tests.md
+++ b/Documentation/running-tests.md
@@ -189,6 +189,16 @@ THROTL_BLKDEV_TYPES="sdebug" ./check throtl/
THROTL_BLKDEV_TYPES="nullb sdebug" ./check throtl/
```
+### Bcache test configuration
+
+The bcache tests require multiple devices at the same time. By default,
+blktests runs each test case once for each device in TEST_DEVS, which
+prevents testing with multiple devices. TEST_CASE_DEV_ARRAY resolves this by
+allowing a multi-device configuration per test. The bcache tests need at
+least three devices, which can be specified in your configuration as follows:
+
+```
+TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/vdb /dev/vdc"
+```
+
### Normal user
To run test cases which require normal user privilege, prepare a user and
--
2.53.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH blktests v4 1/3] bcache: add bcache/001
2026-02-12 15:23 ` [PATCH blktests v4 1/3] bcache: add bcache/001 Daniel Wagner
@ 2026-02-17 7:42 ` Shinichiro Kawasaki
0 siblings, 0 replies; 10+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-17 7:42 UTC (permalink / raw)
To: Daniel Wagner
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
Thanks. I ran this test case and it passes in my environment. Good.
Please find my comments inline. Most of them are nits, and I do not
care much about them. I just wanted to hear your opinion about my comment on partprobe.
On Feb 12, 2026 / 16:23, Daniel Wagner wrote:
[...]
> diff --git a/tests/bcache/001 b/tests/bcache/001
> new file mode 100755
> index 000000000000..7258d87566cb
> --- /dev/null
> +++ b/tests/bcache/001
[...]
> +test_device_array() {
> + echo "Running ${TEST_NAME}"
> +
> + if [[ ${#TEST_DEV_ARRAY[@]} -lt 3 ]]; then
> + SKIP_REASONS+=("requires at least 3 devices")
> + return 1
> + fi
> +
> + _setup_bcache "${TEST_DEV_ARRAY[@]}"
> +
> + local bcache_nodes
Nit: I think "local -a bcache_nodes" is better.
[...]
> diff --git a/tests/bcache/rc b/tests/bcache/rc
> new file mode 100644
> index 000000000000..cfd4094c2fe0
> --- /dev/null
> +++ b/tests/bcache/rc
> @@ -0,0 +1,375 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-3.0+
> +# Copyright (C) 2026 Daniel Wagner, SUSE Labs
> +
> +. common/rc
> +
> +declare BCACHE_DEVS_LIST
> +
> +BCACHE_MAX_RETRIES=5
> +
> +group_requires() {
> + _have_kernel_options MD BCACHE BCACHE_DEBUG AUTOFS_FS
> + _have_program make-bcache
> + _have_crypto_algorithm crc32c
> +}
> +
> +_bcache_wipe_devs() {
> + local devs=("$@")
> + local dev
> +
> + for dev in "${devs[@]}"; do
> + # Attempt a clean wipe first
> + if wipefs --all --quiet "${dev}" 2>/dev/null; then
> + continue
> + fi
> +
> + # Overwrite the first 10MB to clear stubborn partition tables or metadata
> + if ! dd if=/dev/zero of="${dev}" bs=1M count=10 conv=notrunc status=none; then
> + echo "Error: dd failed on ${dev}" >&2
> + fi
> +
> + # Wipe the Tail (Last 5MB)
> + # bcache often places backup superblocks at the end of the device.
> + local dev_size_mb
> + dev_size_mb=$(blockdev --getsize64 "$dev" | awk '{print int($1 / 1024 / 1024)}')
> +
> + if [ "$dev_size_mb" -gt 10 ]; then
> + local seek_pos=$((dev_size_mb - 5))
> + dd if=/dev/zero of="${dev}" bs=1M count=5 seek=$seek_pos conv=fsync status=none
> + fi
> +
> + # Refresh kernel partition table & wait for udev
> + partprobe "$dev" 2>/dev/null
I think _have_program partprobe is required, or can we replace it with
"blockdev --rereadpt"?
> + udevadm settle
> +
> + # Try wiping again after clearing the headers
> + if ! wipefs --all --quiet --force "${dev}"; then
> + echo "Warning: Failed to wipe ${dev} even after dd." >&2
> + fi
> + done
> +}
> +
> +_bcache_register() {
> + local devs=("$@")
> + local dev timeout=0
> +
> + while [[ ! -w /sys/fs/bcache/register ]] && (( timeout < 10 )); do
> + sleep 1
> + (( timeout ++ ))
> + done
> +
> + if [[ ! -w /sys/fs/bcache/register ]]; then
> + echo "ERROR: bcache registration interface not found." >&2
> + return 1
> + fi
> +
> + for dev in "${devs[@]}"; do
> + local tmp_err
> +
> + tmp_err="/tmp/bcache_reg_$$.err"
> + if ! echo "${dev}" > /sys/fs/bcache/register 2> "${tmp_err}"; then
> + local err_msg
> +
> + err_msg=$(< "${tmp_err}")
> + if [[ "${err_msg}" != *"Device or resource busy"* ]]; then
> + echo "ERROR: Failed to register ${dev}: ${err_msg:-"Unknown error"}" >&2
> + fi
> + fi
> + rm -f "${tmp_err}"
> + done
> +}
> +
> +_create_bcache() {
> + local -a cdevs=()
> + local -a bdevs=()
> + local -a ARGS=()
> + local -a created_devs=()
> + local bucket_size="64k"
> + local block_size="4k"
Nit: I think "local dev" can be added here.
> +
> + while [[ $# -gt 0 ]]; do
> + case $1 in
> + --cache)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + cdevs+=("$1")
> + shift
> + done
> + ;;
> + --bdev)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + bdevs+=("$1")
> + shift
> + done
> + ;;
> + --bucket-size)
> + bucket_size="$2"
> + shift 2
> + ;;
> + --block-size)
> + block_size="$2"
> + shift 2
> + ;;
> + --writeback)
> + ARGS+=(--writeback)
> + shift 1
> + ;;
> + --discard)
> + ARGS+=(--discard)
> + shift 1
> + ;;
> + *)
> + echo "WARNING: unknown argument: $1"
> + shift
> + ;;
> + esac
> + done
> +
> + # add /dev prefix to device names
> + cdevs=( "${cdevs[@]/#/\/dev\/}" )
> + bdevs=( "${bdevs[@]/#/\/dev\/}" )
> +
> + # make-bcache expects empty/cleared devices
> + _bcache_wipe_devs "${cdevs[@]}" "${bdevs[@]}"
> +
> + local -a cmd
> + cmd=(make-bcache --wipe-bcache \
> + --bucket "${bucket_size}" \
> + --block "${block_size}")
> + for dev in "${cdevs[@]}"; do cmd+=("--cache" "${dev}"); done
> + for dev in "${bdevs[@]}"; do cmd+=("--bdev" "${dev}"); done
> + cmd+=("${ARGS[@]}")
> +
> + local output rc
> + output=$("${cmd[@]}" 2>&1)
> + rc="$?"
> + if [[ "${rc}" -ne 0 ]]; then
> + echo "ERROR: make-bcache failed:" >&2
> + echo "$output" >&2
> + return 1
> + fi
> +
> + local cset_uuid
> + cset_uuid=$(echo "$output" | awk '/Set UUID:/ {print $3}' | head -n 1)
> + if [[ -z "${cset_uuid}" ]]; then
> + echo "ERROR: Could not extract cset UUID from make-bcache output" >&2
> + return 1
> + fi
> +
> + local -a bdev_uuids
> + mapfile -t bdev_uuids < <(echo "$output" | awk '
> + $1 == "UUID:" { last_uuid = $2 }
> + $1 == "version:" && $2 == "1" { print last_uuid}
> + ')
> +
> + _bcache_register "${cdevs[@]}" "${bdevs[@]}"
> + udevadm settle
> +
> + for uuid in "${bdev_uuids[@]}"; do
> + local link found
> +
> + link=/dev/bcache/by-uuid/"${uuid}"
> + found=false
> +
> + for ((i=0; i<BCACHE_MAX_RETRIES; i++)); do
> + if [[ -L "${link}" ]]; then
> + created_devs+=("$(readlink -f "${link}")")
> + found=true
> + break
> + fi
> +
> + # poke udev to create the links
> + udevadm trigger "block/$(basename "$(readlink -f "${link}" 2>/dev/null || echo "notfound")")" 2>/dev/null
> + sleep 1
> + done
> +
> + if [[ "${found}" == "false" ]]; then
> + echo "WARNING: Could not find device node for UUID ${uuid} after ${BCACHE_MAX_RETRIES}s" >&2
> + fi
> + done
> +
> + printf "%s\n" "${created_devs[@]}"
> +}
> +
> +_remove_bcache() {
> + local -a cdevs=()
> + local -a bdevs=()
> + local -a csets=()
> + local -a bcache_devs=()
> + local uuid
> +
> + while [[ $# -gt 0 ]]; do
> + case $1 in
> + --cache)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + cdevs+=("$1")
> + shift
> + done
> + ;;
> + --bdev)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + bdevs+=("$1")
> + shift
> + done
> + ;;
> + --bcache)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + bcache_devs+=("$1")
> + shift
> + done
> + ;;
> + *)
> + echo "WARNING: unknown argument: $1"
> + shift
> + ;;
> + esac
> + done
> +
> + for dev in "${bcache_devs[@]}"; do
> + local bcache bcache_dir
> +
> + if mountpoint -q "${dev}" 2>/dev/null; then
> + umount -l "${dev}"
Nit: The -q and -l options can be replaced with longer one for readability,
--quiet and --lazy.
> + fi
> +
> + bcache="${dev##*/}"
> + bcache_dir=/sys/block/"${bcache}"/bcache
> + if [ -f "${bcache_dir}"/stop ]; then
> + echo 1 > "${bcache_dir}"/stop
> + fi
> + done
> +
> + # The cache could be detached, thus go through all caches and
> + # look for the cdev in there.
> + local cset_path
> + for cset_path in /sys/fs/bcache/*-*-*-*-*; do
> + local cache_link match_found
> +
> + match_found=false
> + for cache_link in "${cset_path}"/cache[0-9]*; do
> + local full_sys_path _cdev cdev
> +
> + full_sys_path="$(readlink -f "$cache_link")"
> + _cdev="$(basename "${full_sys_path%/bcache}")"
> +
> + for cdev in "${cdevs[@]}"; do
> + if [ "${_cdev}" == "$(basename "${cdev}")" ]; then
> + match_found=true
> + break 2
> + fi
> + done
> + done
> +
> + if [ "${match_found}" = false ]; then
> + continue
> + fi
> +
> + cset="$(basename "${cset_path}")"
> + if [ -d /sys/fs/bcache/"${cset}" ]; then
> + echo 1 > /sys/fs/bcache/"${cset}"/unregister
> + csets+=("${cset}")
> + fi
> + done
> +
> + udevadm settle
> +
> + local timeout
> + for cset in "${csets[@]}"; do
> + timeout=0
> + while [[ -d /sys/fs/bcache/"${cset}" ]] && (( timeout < 10 )); do
> + sleep 0.5
> + (( timeout++ ))
> + done
> + done
> +
> + _bcache_wipe_devs "${cdevs[@]}" "${bdevs[@]}"
> +}
> +
> +_cleanup_bcache() {
> + local cset dev bcache bcache_devs cset_path
Nit: I think 'bdev' can be added in the list above.
> + local -a csets=()
> +
> + read -r -a bcache_devs <<< "${BCACHE_DEVS_LIST:-}"
> +
> + # Don't let successive Ctrl-Cs interrupt the cleanup processes
> + trap '' SIGINT
> +
> + shopt -s nullglob
> + for bcache in /sys/block/bcache* ; do
> + [ -e "${bcache}" ] || continue
> +
> + if [[ -f "${bcache}/bcache/backing_dev_name" ]]; then
> + bdev=$(basename "$(cat "${bcache}/bcache/backing_dev_name")")
> +
> + for dev in "${bcache_devs[@]}"; do
> + if [[ "${bdev}" == "$(basename "${dev}")" ]]; then
> + echo "WARNING: Stopping bcache device ${bdev}"
> + echo 1 > /sys/block/"${bdev}"/bcache/stop 2>/dev/null
> + break
> + fi
> + done
> + fi
> + done
> +
> + for cset_path in /sys/fs/bcache/*-*-*-*-*; do
> + local cache_link match_found
> +
> + match_found=false
> + for cache_link in "${cset_path}"/cache[0-9]*; do
> + local full_sys_path cdev
> +
> + full_sys_path="$(readlink -f "$cache_link")"
> + cdev="$(basename "${full_sys_path%/bcache}")"
> +
> + for dev in "${bcache_devs[@]}"; do
> + if [ "${cdev}" == "$(basename "${dev}")" ]; then
> + match_found=true
> + break 2
> + fi
> + done
> + done
> +
> + if [ "${match_found}" = false ]; then
> + continue
> + fi
> +
> + cset="$(basename "${cset_path}")"
> + if [ -d /sys/fs/bcache/"${cset}" ]; then
> + echo "WARNING: Unregistering cset $(basename "${cset}")"
> + echo 1 > /sys/fs/bcache/"${cset}"/unregister
> + csets+=("${cset}")
> + fi
> + done
> + shopt -u nullglob
> +
> + udevadm settle
> +
> + local timeout
> + for cset in "${csets[@]}"; do
> + timeout=0
> + while [[ -d /sys/fs/bcache/"${cset}" ]] && (( timeout < 10 )); do
> + sleep 0.5
> + (( timeout++ ))
> + done
> + done
> +
> + _bcache_wipe_devs "${bcache_devs[@]}"
> +
> + trap SIGINT
> +}
> +
> +_setup_bcache() {
> + BCACHE_DEVS_LIST="$*"
> +
> + _register_test_cleanup _cleanup_bcache
> +}
>
> --
> 2.53.0
>
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH blktests v4 2/3] bcache: add bcache/002
2026-02-12 15:23 ` [PATCH blktests v4 2/3] bcache: add bcache/002 Daniel Wagner
@ 2026-02-17 7:50 ` Shinichiro Kawasaki
0 siblings, 0 replies; 10+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-17 7:50 UTC (permalink / raw)
To: Daniel Wagner
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
On Feb 12, 2026 / 16:23, Daniel Wagner wrote:
> Add test case from Stephen Zhang [1].
>
> [1] https://lore.kernel.org/linux-bcache/CANubcdX7eNbH_bo4-f94DUbdiEbt04Vxy1MPyhm+CZyXB01FuQ@mail.gmail.com/
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
Thank you for adding this patch. When I ran this test case in my test
environment with kernel v6.19, it failed:
runtime 7.536s ... 7.214s
--- tests/bcache/002.out 2026-02-14 21:16:20.918000000 +0900
+++ /home/shin/Blktests/blktests/results/nvme0n1_nvme2n1_nvme3n1_nvme4n1/bcache/002.out.bad 2026-02-16 14:29:25.596000000 +0900
@@ -1,2 +1,3 @@
Running bcache/002
Device state: no cache
+ERROR: Accounting leak detected!
Is this failure expected?
And let me leave a few nit comments inline.
> ---
> tests/bcache/002 | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> tests/bcache/002.out | 2 ++
> 2 files changed, 64 insertions(+)
>
> diff --git a/tests/bcache/002 b/tests/bcache/002
> new file mode 100755
> index 000000000000..c27178a90c2d
> --- /dev/null
> +++ b/tests/bcache/002
[...]
> +test_device_array() {
> + echo "Running ${TEST_NAME}"
> +
> + if [[ ${#TEST_DEV_ARRAY[@]} -lt 2 ]]; then
> + SKIP_REASONS+=("requires at least 2 devices")
> + return 1
> + fi
> +
> + _setup_bcache "${TEST_DEV_ARRAY[@]}"
> +
> + local bcache_nodes bcache_dev bdev_name fio_pid
Nit: I think bcache_nodes should be declared as an array, with -a option.
Also, 'state' can be added to this local var list.
> +
> + mapfile -t bcache_nodes < <(_create_bcache \
> + --cache "${TEST_DEV_ARRAY[0]##*/}" \
> + --bdev "${TEST_DEV_ARRAY[1]##*/}" \
> + --writeback)
> +
> + bcache_dev="${bcache_nodes[0]}"
> + bdev_name="$(basename "${bcache_dev}")"
> + echo 1 > /sys/block/"${bdev_name}"/bcache/detach
> +
> + state="$(cat /sys/block/"${bdev_name}"/bcache/state)"
> + echo "Device state: ${state}"
> +
> + _run_fio_rand_io --filename="${bcache_dev}" --time_base \
> + --runtime=30 >> "$FULL" 2>&1 &
> + fio_pid=$!
> +
> + sleep 5
> +
> + local stats_line util
> + stats_line=$(iostat -x 1 2 "${bdev_name}" | grep -w "${bdev_name}" | tail -n 1)
> + util="$(echo "${stats_line}" | awk '{print $NF}')"
> +
> + if (( $(echo "${util} > 1.0" | bc -l) )); then
Nit: bc -l option can be --mathlib for readability.
> + echo "ERROR: Accounting leak detected!"
> + fi
> +
> + { pkill -f "fio.*${bcache_dev}"; wait "${fio_pid}"; } &> /dev/null
Nit: pkill -f option can be --full for readability.
> +
> + _remove_bcache --bcache "${bcache_nodes[@]}" \
> + --cache "${TEST_DEV_ARRAY[0]##*/}" \
> + --bdev "${TEST_DEV_ARRAY[1]##*/}"
> +}
* Re: [PATCH blktests v4 0/3] bcache: add initial test cases
2026-02-12 15:23 [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
` (2 preceding siblings ...)
2026-02-12 15:23 ` [PATCH blktests v4 3/3] doc: document how to configure bcache tests Daniel Wagner
@ 2026-03-02 13:54 ` Daniel Wagner
2026-03-03 0:57 ` Shinichiro Kawasaki
3 siblings, 1 reply; 10+ messages in thread
From: Daniel Wagner @ 2026-03-02 13:54 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache
On Thu, Feb 12, 2026 at 04:23:30PM +0100, Daniel Wagner wrote:
> I've updated the v3 version with the feedback from Shinichiro for v2.
> Shinichiro, please note I did rewrite some of the logic in v3, thus some of your
> comments didn't apply. But hopefully I didn't make a big mess :)
ping
* Re: [PATCH blktests v4 0/3] bcache: add initial test cases
2026-03-02 13:54 ` [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
@ 2026-03-03 0:57 ` Shinichiro Kawasaki
2026-03-03 8:04 ` Daniel Wagner
0 siblings, 1 reply; 10+ messages in thread
From: Shinichiro Kawasaki @ 2026-03-03 0:57 UTC (permalink / raw)
To: Daniel Wagner
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
On Mar 02, 2026 / 14:54, Daniel Wagner wrote:
> On Thu, Feb 12, 2026 at 04:23:30PM +0100, Daniel Wagner wrote:
> > I've updated the v3 version with the feedback from Shinichiro for v2.
> > Shinichiro, please note I did rewrite some of the logic in v3, thus some of your
> > comments didn't apply. But hopefully I didn't make a big mess :)
>
> ping
Daniel, I wonder if my replies [1][2] reached your mailbox.
[1] https://lore.kernel.org/linux-block/aZQZkjEUw9VnVauX@shinmob/
[2] https://lore.kernel.org/linux-block/aZQcH-d-ZcgtMoJb@shinmob/
Today, I tried this patch series again on v7.0-rc2 kernel with the
TEST_CASE_DEV_ARRAY below:
TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme4n1"
And found that new files named "nvme?n1" are created in the current directory.
Do you see the files created in your environment?
Also I saw the failure with the message "ERROR: Accounting leak detected!" as I
noted in [2].
* Re: [PATCH blktests v4 0/3] bcache: add initial test cases
2026-03-03 0:57 ` Shinichiro Kawasaki
@ 2026-03-03 8:04 ` Daniel Wagner
2026-03-04 6:41 ` Stephen Zhang
0 siblings, 1 reply; 10+ messages in thread
From: Daniel Wagner @ 2026-03-03 8:04 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
Hi Shinichiro,
On Tue, Mar 03, 2026 at 12:57:18AM +0000, Shinichiro Kawasaki wrote:
> On Mar 02, 2026 / 14:54, Daniel Wagner wrote:
> > On Thu, Feb 12, 2026 at 04:23:30PM +0100, Daniel Wagner wrote:
> > > I've updated the v3 version with the feedback from Shinichiro for v2.
> > > Shinichiro, please note I did rewrite some of the logic in v3, thus some of your
> > > comments didn't apply. But hopefully I didn't make a big mess :)
> >
> > ping
>
> Daniel, I wonder if my replies [1][2] reached your mailbox.
Sorry, no, they didn't. Yet another hiccup with our mail server...
> [1] https://lore.kernel.org/linux-block/aZQZkjEUw9VnVauX@shinmob/
> [2] https://lore.kernel.org/linux-block/aZQcH-d-ZcgtMoJb@shinmob/
I'll work on this then.
> Today, I tried this patch series again on v7.0-rc2 kernel with the
> TEST_CASE_DEV_ARRAY below:
>
> TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme4n1"
>
> And found that new files named "nvme?n1" are created in the current directory.
> Do you see the files created in your environment?
Will check.
> Also I saw the failure with the message "ERROR: Accounting leak detected!" as I
> noted in [2].
I am not really sure whether this test case should be merged in its current form.
I've just used Stephen's test case as input for figuring out what kind
of API is useful. I also see the leak error with v7.0-rc1 which has
3ef825dfd4e4 ("bcache: use bio cloning for detached device requests").
I might have done something wrong here.
BTW, iostat could be replaced by reading directly from sysfs,
e.g. /sys/block/nvme0n1/stat
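For reference, a minimal sketch of that sysfs approach could look like the
following (the helper names and the 1-second sampling window are my own, not
part of the series; the field layout follows Documentation/block/stat.rst,
where io_ticks is field 10):

```shell
# Read io_ticks (ms the device had I/O in flight) from a block device
# stat file, e.g. /sys/block/nvme0n1/stat. Field 10 per the kernel's
# Documentation/block/stat.rst.
_io_ticks() {
	local rios rmerge rsect rticks wios wmerge wsect wticks \
	      inflight ioticks rest
	read -r rios rmerge rsect rticks wios wmerge wsect wticks \
		inflight ioticks rest < "$1"
	echo "${ioticks}"
}

# Approximate %util for a device over a 1-second window by sampling
# io_ticks twice: a delta of N ms over 1000 ms is N/10 percent.
_sysfs_util() {
	local dev="$1" t0 t1
	t0="$(_io_ticks "/sys/block/${dev}/stat")"
	sleep 1
	t1="$(_io_ticks "/sys/block/${dev}/stat")"
	echo "scale=1; (${t1} - ${t0}) / 10" | bc --mathlib
}
```

This would drop the iostat dependency from the test entirely.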
Thanks,
Daniel
* Re: [PATCH blktests v4 0/3] bcache: add initial test cases
2026-03-03 8:04 ` Daniel Wagner
@ 2026-03-04 6:41 ` Stephen Zhang
0 siblings, 0 replies; 10+ messages in thread
From: Stephen Zhang @ 2026-03-04 6:41 UTC (permalink / raw)
To: Daniel Wagner
Cc: Shinichiro Kawasaki, hch@infradead.org, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
Daniel Wagner <dwagner@suse.de> wrote on Tue, 3 Mar 2026 at 16:04:
>
> Hi Shinichiro,
>
> On Tue, Mar 03, 2026 at 12:57:18AM +0000, Shinichiro Kawasaki wrote:
> > On Mar 02, 2026 / 14:54, Daniel Wagner wrote:
> > > On Thu, Feb 12, 2026 at 04:23:30PM +0100, Daniel Wagner wrote:
> > > > I've updated the v3 version with the feedback from Shinichiro for v2.
> > > > Shinichiro, please note I did rewrite some of the logic in v3, thus some of your
> > > > comments didn't apply. But hopefully I didn't make a big mess :)
> > >
> > > ping
> >
> > Daniel, I wonder if my replies [1][2] reached your mailbox.
>
> Sorry, no, they didn't. Yet another hickup with our mail server...
>
> > [1] https://lore.kernel.org/linux-block/aZQZkjEUw9VnVauX@shinmob/
> > [2] https://lore.kernel.org/linux-block/aZQcH-d-ZcgtMoJb@shinmob/
>
> I'll work on this then.
>
> > Today, I tried this patch series again on v7.0-rc2 kernel with the
> > TEST_CASE_DEV_ARRAY below:
> >
> > TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme4n1"
> >
> > And found that new files named "nvme?n1" are created in the current directory.
> > Do you see the files created in your environment?
>
> Will check.
>
> > Also I saw the failure with the message "ERROR: Accounting leak detected!" as I
> > noted in [2].
>
> I am not really sure if this test case should go in the current form.
> I've just used Stephen's test case as input for figuring out what kind
> of API is useful. I also see the leak error with v7.0-rc1 which has
> 3ef825dfd4e4 ("bcache: use bio cloning for detached device requests").
> I might have done something wrong here.
>
Hi,
I noticed an issue with the test: the 100% utilization shown by iostat
during active I/O is normal. The test is flawed because it checks
utilization while fio is still running.
The test does:
1. Start fio in background with --runtime=30
2. Sleep only 5 seconds
3. Check iostat immediately (while fio is still active!)
A real leak would show persistent utilization AFTER all I/O completes.
During active I/O, 100% utilization is expected behavior.
To properly detect a leak, the test should:
1. Wait for fio to complete (wait $fio_pid)
2. Wait a few seconds for I/O to drain
3. Then check iostat
If utilization is still > 0% after I/O completes, then there's a real
accounting leak.
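A sketch of that reordering against the bcache/002 snippet above, assuming
the same helpers and variables (_run_fio_rand_io, ${bcache_dev},
${bdev_name}, $FULL) are in scope; _util_from_iostat and the 5-second drain
period are illustrative, not existing blktests helpers:

```shell
# Pull the last %util sample for a device out of `iostat -x` output.
_util_from_iostat() {
	grep -w "$1" | tail -n 1 | awk '{print $NF}'
}

_check_accounting_leak() {
	local bcache_dev="$1" bdev_name="$2" fio_pid util

	_run_fio_rand_io --filename="${bcache_dev}" --time_base \
		--runtime=30 >> "$FULL" 2>&1 &
	fio_pid=$!

	# 1. Wait for fio to complete instead of sampling mid-run.
	wait "${fio_pid}"

	# 2. Give in-flight I/O a few seconds to drain.
	sleep 5

	# 3. Only now check utilization; anything clearly above 0% after
	#    all I/O has finished points at a real accounting leak.
	util="$(iostat -x 1 2 "${bdev_name}" | _util_from_iostat "${bdev_name}")"
	if (( $(echo "${util} > 1.0" | bc --mathlib) )); then
		echo "ERROR: Accounting leak detected!"
	fi
}
```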
Thanks,
Shida
> BTW, iostat could be replaced by reading directly from sysfs,
> e.g. /sys/block/nvme0n1/stat
>
> Thanks,
> Daniel
end of thread, other threads:[~2026-03-04 6:41 UTC | newest]
Thread overview: 10+ messages
2026-02-12 15:23 [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
2026-02-12 15:23 ` [PATCH blktests v4 1/3] bcache: add bcache/001 Daniel Wagner
2026-02-17 7:42 ` Shinichiro Kawasaki
2026-02-12 15:23 ` [PATCH blktests v4 2/3] bcache: add bcache/002 Daniel Wagner
2026-02-17 7:50 ` Shinichiro Kawasaki
2026-02-12 15:23 ` [PATCH blktests v4 3/3] doc: document how to configure bcache tests Daniel Wagner
2026-03-02 13:54 ` [PATCH blktests v4 0/3] bcache: add initial test cases Daniel Wagner
2026-03-03 0:57 ` Shinichiro Kawasaki
2026-03-03 8:04 ` Daniel Wagner
2026-03-04 6:41 ` Stephen Zhang