* [PATCH blktests v2 0/2] bcache: add bcache/001
@ 2026-01-21 14:36 Daniel Wagner
2026-01-21 14:36 ` [PATCH blktests v2 1/2] " Daniel Wagner
2026-01-21 14:36 ` [PATCH blktests v2 2/2] doc: document how to configure bcache tests Daniel Wagner
0 siblings, 2 replies; 7+ messages in thread
From: Daniel Wagner @ 2026-01-21 14:36 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
I've got it working. The problem is that udevd sometimes doesn't register the
devices in a timely fashion. The workaround is to register the devices
explicitly within the tests. This races with udevd, so there is some logic to
filter out 'already registered' warnings. Eventually someone should look at
this...
Also made the wiping of the disks a bit more robust by stealing the ideas from
Stephen's test case[1]. Thanks!
As requested I also added an entry in running-tests.md. There is already a
section on TEST_CASE_DEV_ARRAY so it's a bit redundant, but given it took me a
while to figure this out, I think it's worth having around.
Now the test runs very stably in my VM. No issues in hundreds of runs:
bcache/001 => nvme0n1 vdb vdc (test bcache setup and teardown) [passed]
runtime 4.659s ... 3.555s
The runtime is not really constant, which is one of the symptoms of all the
workarounds needed to get it working (retry loops, while loops, etc.).
[ 5327.588585][T100690] run blktests bcache/001 at 2026-01-21 14:33:45
[ 5327.742119][T100766] bcache: run_cache_set() invalidating existing data
[ 5327.747790][T100766] bcache: register_cache() registered cache device nvme0n1
[ 5327.782118][T100690] bcache: register_bcache() error : device already registered
[ 5327.790339][T100690] bcache: register_bdev() registered backing device vdb
[ 5327.798633][T100690] bcache: bch_cached_dev_attach() Caching vdb as bcache0 on set 33fc88e8-3251-433e-be43-620f625c
[ 5329.598292][T100396] bcache: bcache_device_free() bcache0 stopped
[ 5329.898146][T94893] bcache: cache_set_free() Cache set 33fc88e8-3251-433e-be43-620f6253e0fc unregistered
[ 5330.315859][T100803] bcache: run_cache_set() invalidating existing data
[ 5330.324537][T100803] bcache: register_cache() registered cache device nvme0n1
[ 5330.330337][T100809] bcache: register_bdev() registered backing device vdc
[ 5330.339168][T100809] bcache: bch_cached_dev_attach() Caching vdc as bcache0 on set 986fa332-8e02-4a1e-942c-b6f3458e
[ 5330.345113][T100813] bcache: register_bdev() registered backing device vdb
[ 5330.362462][T100813] bcache: bch_cached_dev_attach() Caching vdb as bcache1 on set 986fa332-8e02-4a1e-942c-b6f3458e
[ 5330.466850][T100690] bcache: register_bcache() error : device already registered
[ 5330.469039][T100690] bcache: register_bcache() error : device already registered
[ 5330.470933][T100690] bcache: register_bcache() error : device already registered
[ 5330.472971][T100690] bcache: register_bcache() error : device already registered
[ 5330.474847][T100690] bcache: register_bcache() error : device already registered
[ 5330.522678][T11701] bcache: bcache_device_free() bcache1 stopped
[ 5330.571838][T11701] bcache: bcache_device_free() bcache0 stopped
[ 5330.805725][T80064] bcache: cache_set_free() Cache set 986fa332-8e02-4a1e-942c-b6f3458470fe unregistered
The next step is to figure out how to hand the devices back to the calling
test, so the test doesn't have to hardcode the test device. From Stephen's test:
[...]
# 2. Detach
log "Detaching backing device..."
BDEV_NAME=$(basename $BCACHE_DEV)
echo 1 | sudo tee /sys/block/$BDEV_NAME/bcache/detach > /dev/null
[...]
I am sure there is some bash way to do this that I'm just not seeing. But hey,
if it works, I won't complain.
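For reference, a pure-bash equivalent of the `basename` call in Stephen's snippet is parameter expansion; the `BCACHE_DEV` value below is just a hypothetical example:

```shell
#!/bin/bash
# Strip everything up to the last '/' with parameter expansion --
# a pure-bash replacement for basename(1), no subshell needed.
BCACHE_DEV="/dev/bcache0"        # hypothetical example value
BDEV_NAME="${BCACHE_DEV##*/}"
echo "${BDEV_NAME}"              # -> bcache0
```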
[1] https://lore.kernel.org/linux-bcache/CANubcdX7eNbH_bo4-f94DUbdiEbt04Vxy1MPyhm+CZyXB01FuQ@mail.gmail.com/
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Changes in v2:
- fixed whitespace damage
- added documentation on how to configure for bcache tests
- do registering explicitly
- made disk wiping more robust
- Link to v1: https://patch.msgid.link/20260120-bcache-v1-1-59bf0b2d4140@suse.de
---
Daniel Wagner (2):
bcache: add bcache/001
doc: document how to configure bcache tests
Documentation/running-tests.md | 16 +++
tests/bcache/001 | 32 +++++
tests/bcache/001.out | 1 +
tests/bcache/rc | 259 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 308 insertions(+)
---
base-commit: e387a7e0169cc012eb6a7140a0561d2901c92a76
change-id: 20260120-bcache-35ec7368c8f4
Best regards,
--
Daniel Wagner <dwagner@suse.de>
* [PATCH blktests v2 1/2] bcache: add bcache/001
2026-01-21 14:36 [PATCH blktests v2 0/2] bcache: add bcache/001 Daniel Wagner
@ 2026-01-21 14:36 ` Daniel Wagner
2026-01-21 16:08 ` Daniel Wagner
2026-01-22 8:05 ` Shinichiro Kawasaki
2026-01-21 14:36 ` [PATCH blktests v2 2/2] doc: document how to configure bcache tests Daniel Wagner
1 sibling, 2 replies; 7+ messages in thread
From: Daniel Wagner @ 2026-01-21 14:36 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
So far we are missing tests for bcache. Besides a relatively simple
setup/teardown test, also add the corresponding infrastructure. More
tests are expected to depend on it.
_create_bcache/_remove_bcache track the resources and complain if
anything is missing.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
tests/bcache/001 | 32 +++++++
tests/bcache/001.out | 1 +
tests/bcache/rc | 259 +++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 292 insertions(+)
diff --git a/tests/bcache/001 b/tests/bcache/001
new file mode 100644
index 000000000000..4a6e01113b6b
--- /dev/null
+++ b/tests/bcache/001
@@ -0,0 +1,32 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2026 Daniel Wagner, SUSE Labs
+#
+# Test bcache setup and teardown
+
+. tests/bcache/rc
+
+DESCRIPTION="test bcache setup and teardown"
+
+requires() {
+ _bcache_requires
+}
+
+test_device_array() {
+ echo "Running ${TEST_NAME}"
+
+ if [[ ${#TEST_DEV_ARRAY[@]} -lt 3 ]]; then
+ SKIP_REASONS+=("requires at least 3 devices")
+ return 1
+ fi
+
+ _create_bcache \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}"
+ _remove_bcache
+
+ _create_bcache \
+ --cache "${TEST_DEV_ARRAY[0]##*/}" \
+ --bdev "${TEST_DEV_ARRAY[1]##*/}" "${TEST_DEV_ARRAY[2]##*/}"
+ _remove_bcache
+}
diff --git a/tests/bcache/001.out b/tests/bcache/001.out
new file mode 100644
index 000000000000..f890aed2736c
--- /dev/null
+++ b/tests/bcache/001.out
@@ -0,0 +1 @@
+Running bcache/001
diff --git a/tests/bcache/rc b/tests/bcache/rc
new file mode 100644
index 000000000000..3dba6c85b1ee
--- /dev/null
+++ b/tests/bcache/rc
@@ -0,0 +1,259 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2026 Daniel Wagner, SUSE Labs
+
+. common/rc
+
+declare -a BCACHE_DEVS=()
+declare -a BCACHE_BDEVS=()
+declare -a BCACHE_CSETS=()
+
+BCACHE_MAX_RETRIES=5
+
+_bcache_requires() {
+ _have_kernel_options MD BCACHE BCACHE_DEBUG AUTOFS_FS
+ _have_program make-bcache
+ _have_crypto_algorithm crc32c
+}
+
+_bcache_wipe_devs() {
+ for dev in "${BCACHE_DEVS[@]}"; do
+ # Attempt a clean wipe first
+ if wipefs --all --quiet "${dev}" 2>/dev/null; then
+ continue
+ fi
+
+ # Overwrite the first 10MB to clear stubborn partition tables or metadata
+ if ! dd if=/dev/zero of="${dev}" bs=1M count=10 conv=notrunc status=none; then
+ echo "Error: dd failed on ${dev}" >&2
+ fi
+
+ # Try wiping again after clearing the headers
+ if ! wipefs --all --quiet --force "${dev}"; then
+ echo "Warning: Failed to wipe ${dev} even after dd." >&2
+ fi
+ done
+}
+
+_bcache_register() {
+ if [[ ! -w /sys/fs/bcache/register ]]; then
+ echo "ERROR: bcache registration interface not found." >&2
+ return 1
+ fi
+
+ for dev in "${BCACHE_DEVS[@]}"; do
+ if ! echo "${dev}" > /sys/fs/bcache/register 2>/tmp/bcache_err; then
+ err_msg=$(< /tmp/bcache_err)
+
+ if [[ "${err_msg}" != *"Device or resource busy"* ]]; then
+ echo "ERROR: Failed to register ${dev}: ${err_msg:-"Unknown error"}" >&2
+ fi
+ fi
+ done
+}
+
+_create_bcache() {
+ local -a cdevs=()
+ local -a bdevs=()
+ local -a ARGS=()
+ local bucket_size="64k"
+ local block_size="4k"
+
+ _register_test_cleanup _cleanup_bcache
+
+ while [[ $# -gt 0 ]]; do
+ case $1 in
+ --cache)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ cdevs+=("$1")
+ shift
+ done
+ ;;
+ --bdev)
+ shift
+ # Collect arguments until the next flag or end of input
+ while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
+ bdevs+=("$1")
+ shift
+ done
+ ;;
+ --bucket-size)
+ bucket_size="$2"
+ shift 2
+ ;;
+ --block-size)
+ block_size="$2"
+ shift 2
+ ;;
+ --writeback)
+ ARGS+=(--writeback)
+ shift 1
+ ;;
+ --discard)
+ ARGS+=(--discard)
+ shift 1
+ ;;
+ *)
+ echo "WARNING: unknown argument: $1"
+ shift
+ ;;
+ esac
+ done
+
+ # add /dev prefix to device names
+ cdevs=( "${cdevs[@]/#/\/dev\/}" )
+ bdevs=( "${bdevs[@]/#/\/dev\/}" )
+
+ # make-bcache expects empty/cleared devices
+ BCACHE_DEVS+=("${cdevs[@]}" "${bdevs[@]}")
+ _bcache_wipe_devs
+
+ local -a cdevs_args=()
+ for dev in "${cdevs[@]}"; do
+ cdevs_args+=("--cache" "${dev}")
+ done
+
+ local -a bdevs_args=()
+ for dev in "${bdevs[@]}"; do
+ bdevs_args+=("--bdev" "${dev}")
+ done
+
+ local output cmd
+ cmd=(make-bcache \
+ --wipe-bcache \
+ --bucket "${bucket_size}" \
+ --block "${block_size}" \
+ "${cdevs_args[@]}" \
+ "${bdevs_args[@]}" \
+ "${ARGS[@]}")
+
+ output=$("${cmd[@]}" 2>&1)
+ local rc=$?
+ if [[ "${rc}" -ne 0 ]]; then
+ echo "ERROR: make-bcache failed:" >&2
+ echo "$output" >&2
+ return 1
+ fi
+
+ local cset_uuid
+ cset_uuid=$(echo "$output" | awk '/Set UUID:/ {print $3}' | head -n 1)
+ if [[ -z "${cset_uuid}" ]]; then
+ echo "ERROR: Could not extract cset UUID from make-bcache output" >&2
+ return 1
+ fi
+ BCACHE_CSETS+=("${cset_uuid}")
+
+ local -a bdev_uuids
+ mapfile -t bdev_uuids < <(echo "$output" | awk '
+ $1 == "UUID:" { last_uuid = $2 }
+ $1 == "version:" && $2 == "1" { print last_uuid}
+ ')
+
+ udevadm settle
+
+ _bcache_register
+
+ for uuid in "${bdev_uuids[@]}"; do
+ local link found attempt
+
+ link=/dev/bcache/by-uuid/"${uuid}"
+ found=false
+ attempt=0
+
+ while (( attempt < BCACHE_MAX_RETRIES )); do
+ if [[ -L "$link" ]]; then
+ BCACHE_BDEVS+=("${uuid}")
+ found=true
+ break
+ fi
+
+ (( attempt++ ))
+ sleep 1
+ done
+
+ if [[ "$found" == "false" ]]; then
+ echo "WARNING: Could not find device node for UUID ${uuid} after ${BCACHE_MAX_RETRIES}s" >&2
+ fi
+ done
+}
+
+_remove_bcache() {
+ local uuid
+
+ for uuid in "${BCACHE_BDEVS[@]}"; do
+ local dev_path
+
+ dev_path=$(blkid -U "${uuid}")
+ if [ -n "$dev_path" ]; then
+ local dev_name
+
+ dev_name="${dev_path##*/}"
+ if [ -f "/sys/block/${dev_name}/bcache/stop" ] ; then
+ echo 1 > "/sys/block/${dev_name}/bcache/stop"
+ fi
+ fi
+ done
+
+ for uuid in "${BCACHE_CSETS[@]}"; do
+ if [ -f /sys/fs/bcache/"${uuid}"/unregister ] ; then
+ echo 1 > /sys/fs/bcache/"${uuid}"/unregister
+ fi
+ done
+
+ udevadm settle
+
+ local timeout
+ timeout=0
+ for uuid in "${BCACHE_CSETS[@]}"; do
+ while [[ -d "/sys/fs/bcache/${uuid}" ]] && (( timeout < 10 )); do
+ sleep 0.5
+ (( timeout++ ))
+ done
+ done
+
+ _bcache_wipe_devs
+
+ BCACHE_CSETS=()
+ BCACHE_BDEVS=()
+}
+
+_cleanup_bcache() {
+ local cset dev
+
+ shopt -s nullglob
+ for dev in /sys/block/bcache* ; do
+ [ -e "${dev}" ] || continue
+
+ dev=$(basename "${dev}")
+ echo "WARNING: bcache device ${dev} found"
+
+ if [[ -f /sys/block/"${dev}"/bcache/stop ]]; then
+ echo 1 > /sys/block/"${dev}"/bcache/stop 2>/dev/null || true
+ fi
+ done
+
+ for cset in /sys/fs/bcache/*-*-*-*-*; do
+ if [[ -d "${cset}" ]]; then
+ echo "WARNING: Unregistering cset $(basename "${cset}")"
+ echo 1 > "${cset}"/unregister 2>/dev/null || true
+ fi
+ done
+ shopt -u nullglob
+
+ udevadm settle
+
+ local timeout
+ timeout=0
+ for cset in /sys/fs/bcache/*-*-*-*-*; do
+ while [[ -d "${cset}" ]] && (( timeout < 10 )); do
+ sleep 0.5
+ (( timeout++ ))
+ done
+ done
+
+ _bcache_wipe_devs
+
+ BCACHE_DEVS=()
+}
--
2.52.0
* [PATCH blktests v2 2/2] doc: document how to configure bcache tests
2026-01-21 14:36 [PATCH blktests v2 0/2] bcache: add bcache/001 Daniel Wagner
2026-01-21 14:36 ` [PATCH blktests v2 1/2] " Daniel Wagner
@ 2026-01-21 14:36 ` Daniel Wagner
2026-01-22 8:08 ` Shinichiro Kawasaki
1 sibling, 1 reply; 7+ messages in thread
From: Daniel Wagner @ 2026-01-21 14:36 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache, Daniel Wagner
Add a bcache entry to running-tests.md which explains how to configure
blktests for the bcache tests.
Signed-off-by: Daniel Wagner <dwagner@suse.de>
---
Documentation/running-tests.md | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/Documentation/running-tests.md b/Documentation/running-tests.md
index f9da042bb3a0..c4abc6767cd0 100644
--- a/Documentation/running-tests.md
+++ b/Documentation/running-tests.md
@@ -189,6 +189,22 @@ THROTL_BLKDEV_TYPES="sdebug" ./check throtl/
THROTL_BLKDEV_TYPES="nullb sdebug" ./check throtl/
```
+### bcache test configuration
+
+The bcache tests require multiple devices to run simultaneously. By default,
+blktests executes each test case iteratively for every individual device listed
+in TEST_DEVS. This standard behavior makes it impossible to pass a group of
+devices into a single test via TEST_DEVS.
+
+To solve this, TEST_CASE_DEV_ARRAY was introduced. This allows for custom
+device configurations on a per-test basis. For bcache tests, a minimum of three
+devices is required.
+
+Add the following to your configuration to define the devices used for all
+bcache tests:
+
+TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/vdb /dev/vdc"
+
### Normal user
To run test cases which require normal user privilege, prepare a user and
--
2.52.0
* Re: [PATCH blktests v2 1/2] bcache: add bcache/001
2026-01-21 14:36 ` [PATCH blktests v2 1/2] " Daniel Wagner
@ 2026-01-21 16:08 ` Daniel Wagner
2026-01-22 8:05 ` Shinichiro Kawasaki
1 sibling, 0 replies; 7+ messages in thread
From: Daniel Wagner @ 2026-01-21 16:08 UTC (permalink / raw)
To: Christoph Hellwig, Stephen Zhang, Kent Overstreet, Coly Li,
Shin'ichiro Kawasaki, Johannes Thumshirn, linux-block,
linux-bcache
> +declare -a BCACHE_DEVS=()
> +declare -a BCACHE_BDEVS=()
> +declare -a BCACHE_CSETS=()
I've changed _create_bcache so it returns the bcache device, thus I can
hand those into the _remove_bcache function and there is no need for
globals anymore. So don't spend too much time reviewing this :)
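Roughly, the v3 shape could look like the sketch below. Everything here is a placeholder: the `_demo` function names and the hardcoded `bcache0` stand in for the real make-bcache/udev handling, which would discover the device from sysfs.

```shell
#!/bin/bash
# Sketch: hand the created device back on stdout instead of tracking it
# in a global array; the caller captures it with command substitution
# and passes it to the teardown function explicitly.
_create_bcache_demo() {
    local dev="bcache0"   # real code would discover this from sysfs/udev
    echo "${dev}"
}

_remove_bcache_demo() {
    local dev="$1"        # device handed in by the caller, no globals
    echo "removing ${dev}"
}

dev=$(_create_bcache_demo)
_remove_bcache_demo "${dev}"
```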
* Re: [PATCH blktests v2 1/2] bcache: add bcache/001
2026-01-21 14:36 ` [PATCH blktests v2 1/2] " Daniel Wagner
2026-01-21 16:08 ` Daniel Wagner
@ 2026-01-22 8:05 ` Shinichiro Kawasaki
2026-01-22 10:28 ` Daniel Wagner
1 sibling, 1 reply; 7+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-22 8:05 UTC (permalink / raw)
To: Daniel Wagner
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
Hi Daniel, thank you for working on this. It's great to extend the test
coverage :)
Please find my review comments inline. I ran the new test case and observed a
failure. I noted my findings about the failure in one of the inline comments.
On Jan 21, 2026 / 15:36, Daniel Wagner wrote:
> So far we are missing tests for bcache. Besides a relatively simple
> setup/teardown test, also add the corresponding infrastructure. More
> tests are expected to depend on it.
>
> _create_bcache/_remove_bcache track the resources and complain if
> anything is missing.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> tests/bcache/001 | 32 +++++++
> tests/bcache/001.out | 1 +
> tests/bcache/rc | 259 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 292 insertions(+)
>
> diff --git a/tests/bcache/001 b/tests/bcache/001
> new file mode 100644
To be consistent with other test case files, I suggest setting the file mode to 755.
> index 000000000000..4a6e01113b6b
> --- /dev/null
> +++ b/tests/bcache/001
> @@ -0,0 +1,32 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-2.0
Most blktests files specify GPL-3.0+. If there is no specific reason to deviate,
I suggest GPL-3.0+. Same comment for tests/bcache/rc.
> +# Copyright (C) 2026 Daniel Wagner, SUSE Labs
> +#
> +# Test bcache setup and teardown
> +
> +. tests/bcache/rc
> +
> +DESCRIPTION="test bcache setup and teardown"
> +
> +requires() {
> + _bcache_requires
> +}
Do you foresee that all bcache/* test cases will have this _bcache_requires call?
If so, I suggest to:
- rename _bcache_requires() in tests/bcache/rc to group_requires(), and
- drop the three lines above from bcache/001.
group_requires() is called once before testing of the bcache group starts.
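In other words, something along these lines in tests/bcache/rc. The `_have_*` helpers come from common/rc in a real blktests tree; they are stubbed here only so the sketch is self-contained:

```shell
#!/bin/bash
# Stubs standing in for the common/rc helpers -- an assumption made so
# this sketch runs on its own; a real blktests tree provides them.
_have_kernel_options()   { return 0; }
_have_program()          { return 0; }
_have_crypto_algorithm() { return 0; }

# Renamed from _bcache_requires(); blktests calls group_requires() once
# before running the whole group, so each bcache/* test case can drop
# its own requires() boilerplate.
group_requires() {
    _have_kernel_options MD BCACHE BCACHE_DEBUG AUTOFS_FS &&
    _have_program make-bcache &&
    _have_crypto_algorithm crc32c
}

group_requires && echo "bcache group requirements satisfied"
```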
> +
> +test_device_array() {
> + echo "Running ${TEST_NAME}"
> +
> + if [[ ${#TEST_DEV_ARRAY[@]} -lt 3 ]]; then
> + SKIP_REASONS+=("requires at least 3 devices")
> + return 1
> + fi
> +
> + _create_bcache \
> + --cache "${TEST_DEV_ARRAY[0]##*/}" \
> + --bdev "${TEST_DEV_ARRAY[1]##*/}"
> + _remove_bcache
> +
> + _create_bcache \
> + --cache "${TEST_DEV_ARRAY[0]##*/}" \
> + --bdev "${TEST_DEV_ARRAY[1]##*/}" "${TEST_DEV_ARRAY[2]##*/}"
> + _remove_bcache
> +}
> diff --git a/tests/bcache/001.out b/tests/bcache/001.out
> new file mode 100644
> index 000000000000..f890aed2736c
> --- /dev/null
> +++ b/tests/bcache/001.out
> @@ -0,0 +1 @@
> +Running bcache/001
> diff --git a/tests/bcache/rc b/tests/bcache/rc
> new file mode 100644
> index 000000000000..3dba6c85b1ee
> --- /dev/null
> +++ b/tests/bcache/rc
> @@ -0,0 +1,259 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-2.0
> +# Copyright (C) 2026 Daniel Wagner, SUSE Labs
> +
> +. common/rc
> +
> +declare -a BCACHE_DEVS=()
> +declare -a BCACHE_BDEVS=()
> +declare -a BCACHE_CSETS=()
> +
> +BCACHE_MAX_RETRIES=5
> +
> +_bcache_requires() {
> + _have_kernel_options MD BCACHE BCACHE_DEBUG AUTOFS_FS
> + _have_program make-bcache
> + _have_crypto_algorithm crc32c
> +}
As noted above, you may want to rename _bcache_requires() to group_requires().
> +
> +_bcache_wipe_devs() {
Nit: "local dev" can be added here.
> + for dev in "${BCACHE_DEVS[@]}"; do
> + # Attempt a clean wipe first
> + if wipefs --all --quiet "${dev}" 2>/dev/null; then
> + continue
> + fi
> +
> + # Overwrite the first 10MB to clear stubborn partition tables or metadata
> + if ! dd if=/dev/zero of="${dev}" bs=1M count=10 conv=notrunc status=none; then
> + echo "Error: dd failed on ${dev}" >&2
> + fi
> +
> + # Try wiping again after clearing the headers
> + if ! wipefs --all --quiet --force "${dev}"; then
> + echo "Warning: Failed to wipe ${dev} even after dd." >&2
> + fi
> + done
> +}
> +
> +_bcache_register() {
When I did a trial run, the test case bcache/001 failed with the message below.
bcache/001 => nvme1n1 nvme2n1 nvme3n1 nvme4n1 (test bcache setup and teardown) [failed]
runtime 8.406s ... 12.789s
--- tests/bcache/001.out 2026-01-22 13:55:00.510094421 +0900
+++ /home/shin/Blktests/blktests/results/nvme1n1_nvme2n1_nvme3n1_nvme4n1/bcache/001.out.bad 2026-01-22 16:21:47.686976578 +0900
@@ -1 +1,3 @@
Running bcache/001
+ERROR: bcache registration interface not found.
+WARNING: Could not find device node for UUID b854b766-8e08-42b5-b7cc-e137fd78ca51 after 5s
I found that when the bcache driver is built as a module and not yet loaded, the
make-bcache command loads the driver. However, it takes time for the
kernel to prepare /sys/fs/bcache/register, hence the error message above.
I suggest adding the hunk below here, to wait for the register file to become
ready. With this change, the failure disappeared in my test environment.
	local dev err_msg timeout=0

	while [[ ! -w /sys/fs/bcache/register ]] && (( timeout < 10 )); do
		sleep 1
		(( timeout++ ))
	done
> + if [[ ! -w /sys/fs/bcache/register ]]; then
> + echo "ERROR: bcache registration interface not found." >&2
> + return 1
> + fi
> +
> + for dev in "${BCACHE_DEVS[@]}"; do
> + if ! echo "${dev}" > /sys/fs/bcache/register 2>/tmp/bcache_err; then
> + err_msg=$(< /tmp/bcache_err)
> +
> + if [[ "${err_msg}" != *"Device or resource busy"* ]]; then
> + echo "ERROR: Failed to register ${dev}: ${err_msg:-"Unknown error"}" >&2
> + fi
> + fi
> + done
> +}
> +
> +_create_bcache() {
> + local -a cdevs=()
> + local -a bdevs=()
> + local -a ARGS=()
> + local bucket_size="64k"
> + local block_size="4k"
> +
> + _register_test_cleanup _cleanup_bcache
> +
> + while [[ $# -gt 0 ]]; do
> + case $1 in
> + --cache)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + cdevs+=("$1")
> + shift
> + done
> + ;;
> + --bdev)
> + shift
> + # Collect arguments until the next flag or end of input
> + while [[ $# -gt 0 && ! $1 =~ ^-- ]]; do
> + bdevs+=("$1")
> + shift
> + done
> + ;;
> + --bucket-size)
> + bucket_size="$2"
> + shift 2
> + ;;
> + --block-size)
> + block_size="$2"
> + shift 2
> + ;;
> + --writeback)
> + ARGS+=(--writeback)
> + shift 1
> + ;;
> + --discard)
> + ARGS+=(--discard)
> + shift 1
> + ;;
> + *)
> + echo "WARNING: unknown argument: $1"
> + shift
> + ;;
> + esac
> + done
> +
> + # add /dev prefix to device names
> + cdevs=( "${cdevs[@]/#/\/dev\/}" )
> + bdevs=( "${bdevs[@]/#/\/dev\/}" )
> +
> + # make-bcache expects empty/cleared devices
> + BCACHE_DEVS+=("${cdevs[@]}" "${bdevs[@]}")
> + _bcache_wipe_devs
> +
> + local -a cdevs_args=()
Nit: "local dev" can be added here.
> + for dev in "${cdevs[@]}"; do
> + cdevs_args+=("--cache" "${dev}")
> + done
> +
> + local -a bdevs_args=()
> + for dev in "${bdevs[@]}"; do
> + bdevs_args+=("--bdev" "${dev}")
> + done
> +
> + local output cmd
"cmd" is an array, so I guess you meant:
local -a cmd
local output
> + cmd=(make-bcache \
> + --wipe-bcache \
> + --bucket "${bucket_size}" \
> + --block "${block_size}" \
> + "${cdevs_args[@]}" \
> + "${bdevs_args[@]}" \
> + "${ARGS[@]}")
> +
> + output=$("${cmd[@]}" 2>&1)
> + local rc=$?
> + if [[ "${rc}" -ne 0 ]]; then
> + echo "ERROR: make-bcache failed:" >&2
> + echo "$output" >&2
> + return 1
> + fi
> +
> + local cset_uuid
> + cset_uuid=$(echo "$output" | awk '/Set UUID:/ {print $3}' | head -n 1)
> + if [[ -z "${cset_uuid}" ]]; then
> + echo "ERROR: Could not extract cset UUID from make-bcache output" >&2
> + return 1
> + fi
> + BCACHE_CSETS+=("${cset_uuid}")
> +
> + local -a bdev_uuids
> + mapfile -t bdev_uuids < <(echo "$output" | awk '
> + $1 == "UUID:" { last_uuid = $2 }
> + $1 == "version:" && $2 == "1" { print last_uuid}
> + ')
> +
> + udevadm settle
> +
> + _bcache_register
> +
> + for uuid in "${bdev_uuids[@]}"; do
> + local link found attempt
> +
> + link=/dev/bcache/by-uuid/"${uuid}"
> + found=false
> + attempt=0
> +
> + while (( attempt < BCACHE_MAX_RETRIES )); do
> + if [[ -L "$link" ]]; then
> + BCACHE_BDEVS+=("${uuid}")
> + found=true
> + break
> + fi
> +
> + (( attempt++ ))
> + sleep 1
> + done
> +
> + if [[ "$found" == "false" ]]; then
> + echo "WARNING: Could not find device node for UUID ${uuid} after ${BCACHE_MAX_RETRIES}s" >&2
Two spaces are used for the indent above.
> + fi
> + done
> +}
> +
> +_remove_bcache() {
> + local uuid
> +
> + for uuid in "${BCACHE_BDEVS[@]}"; do
> + local dev_path
> +
> + dev_path=$(blkid -U "${uuid}")
> + if [ -n "$dev_path" ]; then
> + local dev_name
> +
> + dev_name="${dev_path##*/}"
> + if [ -f "/sys/block/${dev_name}/bcache/stop" ] ; then
> + echo 1 > "/sys/block/${dev_name}/bcache/stop"
Same here.
> + fi
> + fi
> + done
> +
> + for uuid in "${BCACHE_CSETS[@]}"; do
> + if [ -f /sys/fs/bcache/"${uuid}"/unregister ] ; then
> + echo 1 > /sys/fs/bcache/"${uuid}"/unregister
Same here.
> + fi
> + done
> +
> + udevadm settle
> +
> + local timeout
> + timeout=0
> + for uuid in "${BCACHE_CSETS[@]}"; do
> + while [[ -d "/sys/fs/bcache/${uuid}" ]] && (( timeout < 10 )); do
> + sleep 0.5
> + (( timeout++ ))
> + done
> + done
> +
> + _bcache_wipe_devs
> +
> + BCACHE_CSETS=()
> + BCACHE_BDEVS=()
> +}
> +
> +_cleanup_bcache() {
> + local cset dev
> +
> + shopt -s nullglob
> + for dev in /sys/block/bcache* ; do
> + [ -e "${dev}" ] || continue
> +
> + dev=$(basename "${dev}")
> + echo "WARNING: bcache device ${dev} found"
> +
> + if [[ -f /sys/block/"${dev}"/bcache/stop ]]; then
> + echo 1 > /sys/block/"${dev}"/bcache/stop 2>/dev/null || true
Do we need "|| true" here?
> + fi
> + done
> +
> + for cset in /sys/fs/bcache/*-*-*-*-*; do
> + if [[ -d "${cset}" ]]; then
> + echo "WARNING: Unregistering cset $(basename "${cset}")"
> + echo 1 > "${cset}"/unregister 2>/dev/null || true
Same here.
> + fi
> + done
> + shopt -u nullglob
> +
> + udevadm settle
> +
> + local timeout
> + timeout=0
> + for cset in /sys/fs/bcache/*-*-*-*-*; do
> + while [[ -d "${cset}" ]] && (( timeout < 10 )); do
> + sleep 0.5
> + (( timeout++ ))
> + done
> + done
> +
> + _bcache_wipe_devs
> +
> + BCACHE_DEVS=()
> +}
>
> --
> 2.52.0
>
* Re: [PATCH blktests v2 2/2] doc: document how to configure bcache tests
2026-01-21 14:36 ` [PATCH blktests v2 2/2] doc: document how to configure bcache tests Daniel Wagner
@ 2026-01-22 8:08 ` Shinichiro Kawasaki
0 siblings, 0 replies; 7+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-22 8:08 UTC (permalink / raw)
To: Daniel Wagner
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
On Jan 21, 2026 / 15:36, Daniel Wagner wrote:
> Add a bcache entry to running-tests.md which explains how to configure
> blktests for the bcache tests.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
> Documentation/running-tests.md | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
> diff --git a/Documentation/running-tests.md b/Documentation/running-tests.md
> index f9da042bb3a0..c4abc6767cd0 100644
> --- a/Documentation/running-tests.md
> +++ b/Documentation/running-tests.md
> @@ -189,6 +189,22 @@ THROTL_BLKDEV_TYPES="sdebug" ./check throtl/
> THROTL_BLKDEV_TYPES="nullb sdebug" ./check throtl/
> ```
>
> +### bcache test configuration
> +
> +The bcache tests require multiple devices to run simultaneously. By default,
> +blktests executes each test case iteratively for every individual device listed
> +in TEST_DEVS. This standard behavior makes it impossible to pass a group of
> +devices into a single test via TEST_DEVS.
> +
> +To solve this, TEST_CASE_DEV_ARRAY was introduced. This allows for custom
> +device configurations on a per-test basis. For bcache tests, a minimum of three
> +devices is required.
> +
> +Add the following to your configuration to define the devices used for all
> +bcache tests:
As you noted in the cover letter, TEST_CASE_DEV_ARRAY is already described in
running-tests.md, so I think this section can be somewhat more concise. My quick
rewrite is as follows. If it makes sense to you, please consider picking it up.
### Bcache test configuration
The bcache tests require multiple devices to run simultaneously. By default,
blktests runs each test case for each device in TEST_DEVS. This behavior
prevents testing with multiple devices. TEST_CASE_DEV_ARRAY resolves this by
enabling multiple-device configurations per test. Bcache tests need at
least three devices, which can be specified in your configuration as follows:
> +
> +TEST_CASE_DEV_ARRAY[bcache/*]="/dev/nvme0n1 /dev/vdb /dev/vdc"
> +
> ### Normal user
>
> To run test cases which require normal user privilege, prepare a user and
>
> --
> 2.52.0
>
* Re: [PATCH blktests v2 1/2] bcache: add bcache/001
2026-01-22 8:05 ` Shinichiro Kawasaki
@ 2026-01-22 10:28 ` Daniel Wagner
0 siblings, 0 replies; 7+ messages in thread
From: Daniel Wagner @ 2026-01-22 10:28 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: hch@infradead.org, Stephen Zhang, Kent Overstreet, Coly Li,
Johannes Thumshirn, linux-block@vger.kernel.org,
linux-bcache@vger.kernel.org
On Thu, Jan 22, 2026 at 08:05:04AM +0000, Shinichiro Kawasaki wrote:
> Hi Daniel, thank you for working on this. It's great to extend the test
> coverage :)
>
> Please find my review comments in line. I ran the new test case, and observed a
> failure. I noted my findings about the failure as one of the inline
> comments.
Ah sorry, I saw your response only after sending out v3. I'll work on v4 after
I'm back from my vacation.