* [PATCH blktests 0/7] Further stacked device atomic writes testing
@ 2025-09-12 9:57 John Garry
2025-09-12 9:57 ` [PATCH blktests 1/7] common/rc: add _min() John Garry
` (7 more replies)
0 siblings, 8 replies; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
Testing of atomic write support for stacked devices is currently limited:
we only test scsi_debug, and only for a limited set of personalities.
Extend the testing to cover NVMe and the following stacked device
personalities:
- dm-linear
- dm-stripe
- dm-mirror
Also add stricter atomic write limits testing.
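For context, the dm personalities above are driven by dmsetup tables. A minimal sketch of the table lines involved, with placeholder device paths and sizes (not taken from the tests; the dm-mirror table is omitted since the cover letter does not say which mirror target is used):

```shell
#!/bin/bash
# Illustrative dm table lines for dm-linear and dm-stripe.
# Device paths and sector counts are placeholders.

dev0=/dev/sdX
dev1=/dev/sdY
size=2097152        # target length in 512-byte sectors (1 GiB)
chunk=128           # dm-stripe chunk size in sectors (64 KiB)

# dm-linear: <start> <length> linear <device> <offset>
linear_table="0 $size linear $dev0 0"

# dm-stripe: <start> <length> striped <#stripes> <chunk_sectors> [<dev> <offset>]...
stripe_table="0 $size striped 2 $chunk $dev0 0 $dev1 0"

echo "$linear_table"
echo "$stripe_table"
# Each string would be passed as: dmsetup create <name> --table "$table"
```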
John Garry (7):
common/rc: add _min()
md/rc: add _md_atomics_test
md/002: convert to use _md_atomics_test
md/003: add NVMe atomic write tests for stacked devices
md/rc: test atomic writes for dm-linear
md/rc: test atomic writes for dm-stripe
md/rc: test atomic writes for dm-mirror
common/rc | 11 ++
tests/md/002 | 213 +----------------------
tests/md/002.out | 238 ++++++++++++++++++++-----
tests/md/003 | 52 ++++++
tests/md/003.out | 1 +
tests/md/rc | 441 +++++++++++++++++++++++++++++++++++++++++++++++
6 files changed, 705 insertions(+), 251 deletions(-)
create mode 100755 tests/md/003
create mode 120000 tests/md/003.out
--
2.43.5
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH blktests 1/7] common/rc: add _min()
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-18 4:08 ` Shinichiro Kawasaki
2025-09-12 9:57 ` [PATCH blktests 2/7] md/rc: add _md_atomics_test John Garry
` (6 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
Add a helper to find the minimum of a list of numbers.
A similar helper is being added in xfstests:
https://lore.kernel.org/linux-xfs/cover.1755849134.git.ojaswin@linux.ibm.com/T/#m962683d8115979e57342d2644660230ee978c803
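As a standalone sketch, the helper behaves as follows (reproduced here for illustration; the real implementation lives in common/rc in the diff below):

```shell
#!/bin/bash
# Minimal re-implementation of the _min() helper: returns the smallest
# of its numeric arguments.
_min() {
	local ret arg

	for arg in "$@"; do
		# first argument seeds ret; later ones replace it if smaller
		if [ -z "$ret" ] || (( arg < ret )); then
			ret="$arg"
		fi
	done
	echo "$ret"
}

_min 4096 512 65536    # prints 512
```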
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
common/rc | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/common/rc b/common/rc
index 946dee1..77a0f45 100644
--- a/common/rc
+++ b/common/rc
@@ -700,3 +700,14 @@ _real_dev()
fi
echo "$dev"
}
+
+_min() {
+ local ret arg
+
+ for arg in "$@"; do
+ if [ -z "$ret" ] || (( arg < ret )); then
+ ret="$arg"
+ fi
+ done
+ echo "$ret"
+}
\ No newline at end of file
--
2.43.5
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH blktests 2/7] md/rc: add _md_atomics_test
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
2025-09-12 9:57 ` [PATCH blktests 1/7] common/rc: add _min() John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-18 4:17 ` Shinichiro Kawasaki
2025-09-12 9:57 ` [PATCH blktests 3/7] md/002: convert to use _md_atomics_test John Garry
` (5 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
The stacked device atomic writes testing is currently limited.
md/002 only tests scsi_debug. SCSI does not support atomic
boundaries, so it would be nice to test NVMe (which does support them).
Furthermore, the testing in md/002 for chunk boundaries is very limited,
in that we test only a single boundary value. Indeed, for RAID0 and RAID10, a
boundary should always be set for testing.
Finally, md/002 only tests md RAID0/1/10. In the future we will also want to
test the following stacked device personalities which support atomic
writes:
- md-linear (being upstreamed)
- dm-linear
- dm-stripe
- dm-mirror
To solve all those problems, add a generic test handler,
_md_atomics_test(). This can be extended for more extensive testing.
This test handler will accept a group of devices and test as follows:
a. calculate expected atomic write limits based on device limits
b. take the results from a. and refine the expected limits based on any chunk
size
c. loop through creating a stacked device for different chunk sizes. We loop
only once for any personality which does not have a chunk size, e.g. RAID1
d. test sysfs and statx limits against what is calculated in a. and b.
e. test that RWF_ATOMIC is accepted or rejected as expected
Steps c, d, and e are essentially the same as in md/002.
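The chunk-size refinement in step b hinges on the largest power-of-two factor of the chunk size, which the series computes with the classic two's-complement `v & -v` trick. A standalone sketch (function name here is illustrative):

```shell
#!/bin/bash
# Largest power-of-two factor of a positive integer, as used when deriving
# a power-of-2 atomic write "unit" limit from a non-power-of-2 chunk size.
max_pow_of_two_factor() {
	local v=$1
	# In two's complement, v & -v isolates the lowest set bit,
	# which is exactly the largest power of two dividing v.
	echo $(( v & -v ))
}

max_pow_of_two_factor 24     # prints 8    (24 = 8 * 3)
max_pow_of_two_factor 4096   # prints 4096 (already a power of two)
```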
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
tests/md/rc | 372 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 372 insertions(+)
diff --git a/tests/md/rc b/tests/md/rc
index 96bcd97..105d283 100644
--- a/tests/md/rc
+++ b/tests/md/rc
@@ -5,9 +5,381 @@
# Tests for md raid
. common/rc
+. common/xfs
group_requires() {
+ _have_kver 6 14 0
_have_root
_have_program mdadm
+ _have_xfs_io_atomic_write
+ _have_driver raid0
+ _have_driver raid1
+ _have_driver raid10
_have_driver md-mod
}
+
+declare -a MD_DEVICES
+
+_max_pow_of_two_factor() {
+ local part1=$1
+ local part2=-$1
+ local retval=$(( part1 & part2 ))
+ echo "$retval"
+}
+
+# Find max atomic size given a boundary and chunk size
+# @unit is set if we want the atomic write "unit" size, i.e. power-of-2
+# @chunk must be > 0
+_md_atomics_boundaries_max() {
+ local boundary=$1
+ local chunk=$2
+ local unit=$3 retval
+
+ if [ "$boundary" -eq 0 ]
+ then
+ if [ "$unit" -eq 1 ]
+ then
+ retval=$(_max_pow_of_two_factor "$chunk")
+ echo "$retval"
+ return
+ fi
+
+ echo "$chunk"
+ return
+ fi
+
+ # boundary is always a power-of-2
+ if [ "$boundary" -eq "$chunk" ]
+ then
+ echo "$boundary"
+ return
+ fi
+
+ if [ "$boundary" -gt "$chunk" ]
+ then
+ if (( boundary % chunk == 0 ))
+ then
+ if [ "$unit" -eq 1 ]
+ then
+ retval=$(_max_pow_of_two_factor "$chunk")
+ echo "$retval"
+ return
+ fi
+ echo "$chunk"
+ return
+ fi
+ echo "0"
+ return
+ fi
+
+ if (( chunk % boundary == 0 ))
+ then
+ echo "$boundary"
+ return
+ fi
+
+ echo "0"
+}
+
+_md_atomics_test() {
+ local md_atomic_unit_max
+ local md_atomic_unit_min
+ local md_sysfs_max_hw_sectors_kb
+ local md_sysfs_max_hw
+ local md_chunk_size
+ local sysfs_logical_block_size
+ local sysfs_atomic_write_max
+ local sysfs_atomic_write_unit_min
+ local sysfs_atomic_write_unit_max
+ local bytes_to_write
+ local bytes_written
+ local test_desc
+ local md_dev
+ local md_dev_sysfs
+ local raw_atomic_write_unit_min
+ local raw_atomic_write_unit_max
+ local raw_atomic_write_max
+ local raw_atomic_write_boundary
+ local raw_atomic_write_supported=1
+
+ dev0=$1
+ dev1=$2
+ dev2=$3
+ dev3=$4
+ unset MD_DEVICES
+ MD_DEVICES=($dev0 $dev1 $dev2 $dev3);
+
+ # Calculate what we expect the atomic write limits to be
+ # Don't consider any chunk size at this stage
+ # Use the limits from the first device and then loop again to find
+ # lowest common supported
+ raw_atomic_write_unit_min=$(< /sys/block/"$dev0"/queue/atomic_write_unit_min_bytes);
+ raw_atomic_write_unit_max=$(< /sys/block/"$dev0"/queue/atomic_write_unit_max_bytes);
+ raw_atomic_write_max=$(< /sys/block/"$dev0"/queue/atomic_write_max_bytes);
+ raw_atomic_write_boundary=$(< /sys/block/"$dev0"/queue/atomic_write_boundary_bytes);
+
+ for i in "${MD_DEVICES[@]}"; do
+ if [[ $(< /sys/block/"$i"/queue/atomic_write_unit_min_bytes) -gt raw_atomic_write_unit_min ]]; then
+ raw_atomic_write_unit_min=$(< /sys/block/"$i"/queue/atomic_write_unit_min_bytes)
+ fi
+ if [[ $(< /sys/block/"$i"/queue/atomic_write_unit_max_bytes) -lt raw_atomic_write_unit_max ]]; then
+ raw_atomic_write_unit_max=$(< /sys/block/"$i"/queue/atomic_write_unit_max_bytes)
+ fi
+ if [[ $(< /sys/block/"$i"/queue/atomic_write_max_bytes) -lt raw_atomic_write_max ]]; then
+ raw_atomic_write_max=$(< /sys/block/"$i"/queue/atomic_write_max_bytes)
+ fi
+ # The kernel only supports same boundary size for all devices in the array
+ if [[ $(< /sys/block/"$i"/queue/atomic_write_boundary_bytes) -ne raw_atomic_write_boundary ]]; then
+ let raw_atomic_write_supported=0;
+ fi
+ done
+
+ # Check if we can support atomic writes for the array of devices given.
+ # If we cannot, then it is still worth trying to test that atomic
+ # writes don't work (as we would expect).
+
+ if [[ raw_atomic_write_supported -eq 0 ]]; then
+ let raw_atomic_write_unit_min=0;
+ let raw_atomic_write_unit_max=0;
+ let raw_atomic_write_max=0;
+ let raw_atomic_write_boundary=0;
+ fi
+
+ for personality in raid0 raid1 raid10; do
+ if [ "$personality" = raid0 ] || [ "$personality" = raid10 ]
+ then
+ step_limit=4
+ else
+ step_limit=1
+ fi
+ chunk_gran=$(( "$raw_atomic_write_unit_max" / 2))
+ if [ "$chunk_gran" -lt 4096 ]
+ then
+ let chunk_gran=4096
+ fi
+
+ local chunk_multiple=1
+ for step in $(seq 1 "$step_limit")
+ do
+ local expected_atomic_write_unit_min
+ local expected_atomic_write_unit_max
+ local expected_atomic_write_max
+ local expected_atomic_write_boundary
+
+ # only raid0 does not require a power-of-2 chunk size
+ if [ "$personality" = raid0 ]
+ then
+ chunk_multiple=$step
+ else
+ chunk_multiple=$(( 2 * "$chunk_multiple"))
+ fi
+ md_chunk_size=$(( "$chunk_gran" * "$chunk_multiple"))
+ md_chunk_size_kb=$(( "$md_chunk_size" / 1024))
+
+ # We may reassign these for RAID0/10
+ let expected_atomic_write_unit_min=$raw_atomic_write_unit_min
+ let expected_atomic_write_unit_max=$raw_atomic_write_unit_max
+ let expected_atomic_write_max=$raw_atomic_write_max
+ let expected_atomic_write_boundary=$raw_atomic_write_boundary
+
+ if [ "$personality" = raid0 ] || [ "$personality" = raid10 ]
+ then
+ echo y | mdadm --create /dev/md/blktests_md --level=$personality \
+ --chunk="${md_chunk_size_kb}"K \
+ --raid-devices=4 --force /dev/"${dev0}" /dev/"${dev1}" \
+ /dev/"${dev2}" /dev/"${dev3}" 2> /dev/null 1>&2
+
+ atomics_boundaries_unit_max=$(_md_atomics_boundaries_max $raw_atomic_write_boundary $md_chunk_size "1")
+ atomics_boundaries_max=$(_md_atomics_boundaries_max $raw_atomic_write_boundary $md_chunk_size "0")
+ expected_atomic_write_unit_min=$(_min $expected_atomic_write_unit_min $atomics_boundaries_unit_max)
+ expected_atomic_write_unit_max=$(_min $expected_atomic_write_unit_max $atomics_boundaries_unit_max)
+ expected_atomic_write_max=$(_min $expected_atomic_write_max $atomics_boundaries_max)
+ if [ "$atomics_boundaries_max" -eq 0 ]
+ then
+ expected_atomic_write_boundary=0
+ fi
+ md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
+ fi
+
+ if [ "$personality" = raid1 ]
+ then
+ echo y | mdadm --create /dev/md/blktests_md --level=$personality \
+ --raid-devices=4 --force /dev/"${dev0}" /dev/"${dev1}" \
+ /dev/"${dev2}" /dev/"${dev3}" 2> /dev/null 1>&2
+
+ md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
+ fi
+
+ md_dev_sysfs="/sys/devices/virtual/block/${md_dev}"
+
+ sysfs_logical_block_size=$(< "${md_dev_sysfs}"/queue/logical_block_size)
+ md_sysfs_max_hw_sectors_kb=$(< "${md_dev_sysfs}"/queue/max_hw_sectors_kb)
+ md_sysfs_max_hw=$(( "$md_sysfs_max_hw_sectors_kb" * 1024 ))
+ sysfs_atomic_write_max=$(< "${md_dev_sysfs}"/queue/atomic_write_max_bytes)
+ sysfs_atomic_write_unit_max=$(< "${md_dev_sysfs}"/queue/atomic_write_unit_max_bytes)
+ sysfs_atomic_write_unit_min=$(< "${md_dev_sysfs}"/queue/atomic_write_unit_min_bytes)
+ sysfs_atomic_write_boundary=$(< "${md_dev_sysfs}"/queue/atomic_write_boundary_bytes)
+
+ test_desc="TEST 1 $personality step $step - Verify md sysfs atomic attributes matches"
+ if [ "$sysfs_atomic_write_unit_min" = "$expected_atomic_write_unit_min" ] &&
+ [ "$sysfs_atomic_write_unit_max" = "$expected_atomic_write_unit_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail sysfs_atomic_write_unit_min="$sysfs_atomic_write_unit_min \
+ "expected_atomic_write_unit_min="$expected_atomic_write_unit_min \
+ "sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "expected_atomic_write_unit_max="$expected_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 2 $personality step $step - Verify sysfs atomic attributes"
+ if [ "$md_sysfs_max_hw" -ge "$sysfs_atomic_write_max" ] &&
+ [ "$sysfs_atomic_write_unit_max" -ge "$sysfs_atomic_write_unit_min" ] &&
+ [ "$sysfs_atomic_write_max" -ge "$sysfs_atomic_write_unit_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail $md_sysfs_max_hw="$md_sysfs_max_hw \
+ "sysfs_atomic_write_max="$sysfs_atomic_write_max \
+ "sysfs_atomic_write_unit_min="$sysfs_atomic_write_unit_min \
+ "sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 3 $personality step $step - Verify md sysfs_atomic_write_max is equal to "
+ test_desc+="expected_atomic_write_max"
+ if [ "$sysfs_atomic_write_max" -eq "$expected_atomic_write_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail sysfs_atomic_write_max="$sysfs_atomic_write_max \
+ "expected_atomic_write_max="$expected_atomic_write_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 4 $personality step $step - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max"
+ if [ "$sysfs_atomic_write_unit_max" = "$expected_atomic_write_unit_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "expected_atomic_write_unit_max="$expected_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 5 $personality step $step - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes"
+ if [ "$sysfs_atomic_write_boundary" = "$expected_atomic_write_boundary" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail sysfs_atomic_write_boundary="$sysfs_atomic_write_boundary \
+ "expected_atomic_write_boundary="$expected_atomic_write_boundary
+ fi
+
+ test_desc="TEST 6 $personality step $step - Verify statx stx_atomic_write_unit_min"
+ statx_atomic_write_unit_min=$(run_xfs_io_xstat /dev/"$md_dev" "stat.atomic_write_unit_min")
+ if [ "$statx_atomic_write_unit_min" = "$sysfs_atomic_write_unit_min" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail statx_atomic_write_unit_min="$statx_atomic_write_unit_min \
+ "sysfs_atomic_write_unit_min="$sysfs_atomic_write_unit_min \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 7 $personality step $step - Verify statx stx_atomic_write_unit_max"
+ statx_atomic_write_unit_max=$(run_xfs_io_xstat /dev/"$md_dev" "stat.atomic_write_unit_max")
+ if [ "$statx_atomic_write_unit_max" = "$sysfs_atomic_write_unit_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail statx_atomic_write_unit_max="$statx_atomic_write_unit_max \
+ "sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+
+ test_desc="TEST 8 $personality step $step - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with "
+ test_desc+="RWF_ATOMIC flag - pwritev2 should fail"
+ if [ "$sysfs_atomic_write_unit_max" = 0 ]
+ then
+ echo "$test_desc - pass"
+ else
+ bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$sysfs_atomic_write_unit_max")
+ if [ "$bytes_written" = "$sysfs_atomic_write_unit_max" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail bytes_written="$bytes_written \
+ "sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+ fi
+
+ test_desc="TEST 9 $personality step $step - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS "
+ test_desc+="bytes with RWF_ATOMIC flag - pwritev2 should not be succesful"
+ if [ "$sysfs_atomic_write_unit_max" = 0 ]
+ then
+ echo "pwrite: Invalid argument"
+ echo "$test_desc - pass"
+ else
+ bytes_to_write=$(( "${sysfs_atomic_write_unit_max}" + "${sysfs_logical_block_size}" ))
+ bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$bytes_to_write")
+ if [ "$bytes_written" = "" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail bytes_written="$bytes_written \
+ "bytes_to_write="$bytes_to_write \
+ "sysfs_atomic_write_unit_max="$sysfs_atomic_write_unit_max \
+ "md_chunk_size="$md_chunk_size
+ fi
+ fi
+
+ test_desc="TEST 10 $personality step $step - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes "
+ test_desc+="with RWF_ATOMIC flag - pwritev2 should fail"
+ if [ "$sysfs_atomic_write_unit_min" = 0 ]
+ then
+ echo "$test_desc - pass"
+ else
+ bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$sysfs_atomic_write_unit_min")
+ if [ "$bytes_written" = "$sysfs_atomic_write_unit_min" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail bytes_written="$bytes_written \
+ "sysfs_atomic_write_unit_min="$sysfs_atomic_write_unit_min \
+ "md_chunk_size="$md_chunk_size
+ fi
+ fi
+
+ test_desc="TEST 11 $personality step $step - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS "
+ test_desc+="bytes with RWF_ATOMIC flag - pwritev2 should fail"
+ if [ "${sysfs_atomic_write_unit_max}" -le "${sysfs_logical_block_size}" ]
+ then
+ echo "pwrite: Invalid argument"
+ echo "$test_desc - pass"
+ else
+ bytes_to_write=$(( "${sysfs_atomic_write_unit_max}" - "${sysfs_logical_block_size}" ))
+ bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$bytes_to_write")
+ if [ "$bytes_written" = "" ]
+ then
+ echo "$test_desc - pass"
+ else
+ echo "$test_desc - fail bytes_written="$bytes_written \
+ "bytes_to_write="$bytes_to_write \
+ "md_chunk_size="$md_chunk_size
+ fi
+ fi
+
+ if [ "$personality" = raid0 ] || [ "$personality" = raid1 ] || [ "$personality" = raid10 ]
+ then
+ mdadm --stop /dev/md/blktests_md 2> /dev/null 1>&2
+ mdadm --zero-superblock /dev/"${dev0}" 2> /dev/null 1>&2
+ mdadm --zero-superblock /dev/"${dev1}" 2> /dev/null 1>&2
+ mdadm --zero-superblock /dev/"${dev2}" 2> /dev/null 1>&2
+ mdadm --zero-superblock /dev/"${dev3}" 2> /dev/null 1>&2
+ fi
+ done
+ done
+}
--
2.43.5
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH blktests 3/7] md/002: convert to use _md_atomics_test
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
2025-09-12 9:57 ` [PATCH blktests 1/7] common/rc: add _min() John Garry
2025-09-12 9:57 ` [PATCH blktests 2/7] md/rc: add _md_atomics_test John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices John Garry
` (4 subsequent siblings)
7 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
_md_atomics_test does even more testing than md/002 does now.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
tests/md/002 | 210 +----------------------------------------------
tests/md/002.out | 158 ++++++++++++++++++++++++++---------
2 files changed, 119 insertions(+), 249 deletions(-)
diff --git a/tests/md/002 b/tests/md/002
index fdf1e23..990b64b 100755
--- a/tests/md/002
+++ b/tests/md/002
@@ -12,41 +12,10 @@ DESCRIPTION="test md atomic writes"
QUICK=1
requires() {
- _have_kver 6 14 0
- _have_program mdadm
_have_driver scsi_debug
- _have_xfs_io_atomic_write
- _have_driver raid0
- _have_driver raid1
- _have_driver raid10
}
test() {
- local scsi_debug_atomic_wr_max_length
- local scsi_debug_atomic_wr_gran
- local scsi_sysfs_atomic_max_bytes
- local scsi_sysfs_atomic_unit_max_bytes
- local scsi_sysfs_atomic_unit_min_bytes
- local md_atomic_max_bytes
- local md_atomic_min_bytes
- local md_sysfs_max_hw_sectors_kb
- local md_max_hw_bytes
- local md_chunk_size
- local md_chunk_size_bytes
- local md_sysfs_logical_block_size
- local md_sysfs_atomic_max_bytes
- local md_sysfs_atomic_unit_max_bytes
- local md_sysfs_atomic_unit_min_bytes
- local bytes_to_write
- local bytes_written
- local test_desc
- local scsi_0
- local scsi_1
- local scsi_2
- local scsi_3
- local scsi_dev_sysfs
- local md_dev
- local md_dev_sysfs
local scsi_debug_params=(
delay=0
atomic_wr=1
@@ -66,183 +35,8 @@ test() {
scsi_2="${SCSI_DEBUG_DEVICES[2]}"
scsi_3="${SCSI_DEBUG_DEVICES[3]}"
- scsi_dev_sysfs="/sys/block/${scsi_0}"
- scsi_sysfs_atomic_max_bytes=$(< "${scsi_dev_sysfs}"/queue/atomic_write_max_bytes)
- scsi_sysfs_atomic_unit_max_bytes=$(< "${scsi_dev_sysfs}"/queue/atomic_write_unit_max_bytes)
- scsi_sysfs_atomic_unit_min_bytes=$(< "${scsi_dev_sysfs}"/queue/atomic_write_unit_min_bytes)
- scsi_debug_atomic_wr_max_length=$(< /sys/module/scsi_debug/parameters/atomic_wr_max_length)
- scsi_debug_atomic_wr_gran=$(< /sys/module/scsi_debug/parameters/atomic_wr_gran)
-
- for raid_level in 0 1 10; do
- if [ "$raid_level" = 10 ]
- then
- mdadm --create /dev/md/blktests_md --level=$raid_level \
- --raid-devices=4 --force --run /dev/"${scsi_0}" /dev/"${scsi_1}" \
- /dev/"${scsi_2}" /dev/"${scsi_3}" 2> /dev/null 1>&2
- else
- mdadm --create /dev/md/blktests_md --level=$raid_level \
- --raid-devices=2 --force --run \
- /dev/"${scsi_0}" /dev/"${scsi_1}" 2> /dev/null 1>&2
- fi
-
- md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
- md_dev_sysfs="/sys/devices/virtual/block/${md_dev}"
-
- md_sysfs_logical_block_size=$(< "${md_dev_sysfs}"/queue/logical_block_size)
- md_sysfs_max_hw_sectors_kb=$(< "${md_dev_sysfs}"/queue/max_hw_sectors_kb)
- md_max_hw_bytes=$(( "$md_sysfs_max_hw_sectors_kb" * 1024 ))
- md_sysfs_atomic_max_bytes=$(< "${md_dev_sysfs}"/queue/atomic_write_max_bytes)
- md_sysfs_atomic_unit_max_bytes=$(< "${md_dev_sysfs}"/queue/atomic_write_unit_max_bytes)
- md_sysfs_atomic_unit_min_bytes=$(< "${md_dev_sysfs}"/queue/atomic_write_unit_min_bytes)
- md_atomic_max_bytes=$(( "$scsi_debug_atomic_wr_max_length" * "$md_sysfs_logical_block_size" ))
- md_atomic_min_bytes=$(( "$scsi_debug_atomic_wr_gran" * "$md_sysfs_logical_block_size" ))
-
- test_desc="TEST 1 RAID $raid_level - Verify md sysfs atomic attributes matches scsi"
- if [ "$md_sysfs_atomic_unit_max_bytes" = "$scsi_sysfs_atomic_unit_max_bytes" ] &&
- [ "$md_sysfs_atomic_unit_min_bytes" = "$scsi_sysfs_atomic_unit_min_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_sysfs_atomic_unit_max_bytes - $scsi_sysfs_atomic_unit_max_bytes -" \
- "$md_sysfs_atomic_unit_min_bytes - $scsi_sysfs_atomic_unit_min_bytes "
- fi
-
- test_desc="TEST 2 RAID $raid_level - Verify sysfs atomic attributes"
- if [ "$md_max_hw_bytes" -ge "$md_sysfs_atomic_max_bytes" ] &&
- [ "$md_sysfs_atomic_max_bytes" -ge "$md_sysfs_atomic_unit_max_bytes" ] &&
- [ "$md_sysfs_atomic_unit_max_bytes" -ge "$md_sysfs_atomic_unit_min_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_max_hw_bytes - $md_sysfs_max_hw_sectors_kb -" \
- "$md_sysfs_atomic_max_bytes - $md_sysfs_atomic_unit_max_bytes -" \
- "$md_sysfs_atomic_unit_min_bytes"
- fi
-
- test_desc="TEST 3 RAID $raid_level - Verify md sysfs_atomic_max_bytes is less than or equal "
- test_desc+="scsi sysfs_atomic_max_bytes"
- if [ "$md_sysfs_atomic_max_bytes" -le "$scsi_sysfs_atomic_max_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_sysfs_atomic_max_bytes - $scsi_sysfs_atomic_max_bytes"
- fi
-
- test_desc="TEST 4 RAID $raid_level - check sysfs atomic_write_unit_max_bytes <= scsi_debug atomic_wr_max_length"
- if (("$md_sysfs_atomic_unit_max_bytes" <= "$md_atomic_max_bytes"))
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_sysfs_atomic_unit_max_bytes - $md_atomic_max_bytes"
- fi
-
- test_desc="TEST 5 RAID $raid_level - check sysfs atomic_write_unit_min_bytes = scsi_debug atomic_wr_gran"
- if [ "$md_sysfs_atomic_unit_min_bytes" = "$md_atomic_min_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_sysfs_atomic_unit_min_bytes - $md_atomic_min_bytes"
- fi
-
- test_desc="TEST 6 RAID $raid_level - check statx stx_atomic_write_unit_min"
- statx_atomic_min=$(run_xfs_io_xstat /dev/"$md_dev" "stat.atomic_write_unit_min")
- if [ "$statx_atomic_min" = "$md_atomic_min_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $statx_atomic_min - $md_atomic_min_bytes"
- fi
-
- test_desc="TEST 7 RAID $raid_level - check statx stx_atomic_write_unit_max"
- statx_atomic_max=$(run_xfs_io_xstat /dev/"$md_dev" "stat.atomic_write_unit_max")
- if [ "$statx_atomic_max" = "$md_sysfs_atomic_unit_max_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $statx_atomic_max - $md_sysfs_atomic_unit_max_bytes"
- fi
-
- test_desc="TEST 8 RAID $raid_level - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with "
- test_desc+="RWF_ATOMIC flag - pwritev2 should be succesful"
- bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$md_sysfs_atomic_unit_max_bytes")
- if [ "$bytes_written" = "$md_sysfs_atomic_unit_max_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $bytes_written - $md_sysfs_atomic_unit_max_bytes"
- fi
-
- test_desc="TEST 9 RAID $raid_level - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + 512 "
- test_desc+="bytes with RWF_ATOMIC flag - pwritev2 should not be succesful"
- bytes_to_write=$(( "${md_sysfs_atomic_unit_max_bytes}" + 512 ))
- bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$bytes_to_write")
- if [ "$bytes_written" = "" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $bytes_written - $bytes_to_write"
- fi
-
- test_desc="TEST 10 RAID $raid_level - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes "
- test_desc+="with RWF_ATOMIC flag - pwritev2 should be succesful"
- bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$md_sysfs_atomic_unit_min_bytes")
- if [ "$bytes_written" = "$md_sysfs_atomic_unit_min_bytes" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $bytes_written - $md_atomic_min_bytes"
- fi
-
- bytes_to_write=$(( "${md_sysfs_atomic_unit_min_bytes}" - "${md_sysfs_logical_block_size}" ))
- test_desc="TEST 11 RAID $raid_level - perform a pwritev2 with a size of sysfs_atomic_unit_min_bytes - 512 "
- test_desc+="bytes with RWF_ATOMIC flag - pwritev2 should fail"
- if [ "$bytes_to_write" = 0 ]
- then
- echo "$test_desc - pass"
- else
- bytes_written=$(run_xfs_io_pwritev2_atomic /dev/"$md_dev" "$bytes_to_write")
- if [ "$bytes_written" = "" ]
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $bytes_written - $bytes_to_write"
- fi
- fi
-
- mdadm --stop /dev/md/blktests_md 2> /dev/null 1>&2
-
- if [ "$raid_level" = 0 ] || [ "$raid_level" = 10 ]
- then
- md_chunk_size=$(( "$scsi_sysfs_atomic_unit_max_bytes" / 2048))
-
- if [ "$raid_level" = 0 ]
- then
- mdadm --create /dev/md/blktests_md --level=$raid_level \
- --raid-devices=2 --chunk="${md_chunk_size}"K --force --run \
- /dev/"${scsi_0}" /dev/"${scsi_1}" 2> /dev/null 1>&2
- else
- mdadm --create /dev/md/blktests_md --level=$raid_level \
- --raid-devices=4 --chunk="${md_chunk_size}"K --force --run \
- /dev/"${scsi_0}" /dev/"${scsi_1}" \
- /dev/"${scsi_2}" /dev/"${scsi_3}" 2> /dev/null 1>&2
- fi
-
- md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
- md_dev_sysfs="/sys/devices/virtual/block/${md_dev}"
- md_sysfs_atomic_unit_max_bytes=$(< "${md_dev_sysfs}"/queue/atomic_write_unit_max_bytes)
- md_chunk_size_bytes=$(( "$md_chunk_size" * 1024))
- test_desc="TEST 12 RAID $raid_level - Verify chunk size "
- if [ "$md_chunk_size_bytes" -le "$md_sysfs_atomic_unit_max_bytes" ] && \
- (( md_sysfs_atomic_unit_max_bytes % md_chunk_size_bytes == 0 ))
- then
- echo "$test_desc - pass"
- else
- echo "$test_desc - fail $md_chunk_size_bytes - $md_sysfs_atomic_unit_max_bytes"
- fi
-
- mdadm --quiet --stop /dev/md/blktests_md
- fi
- done
+ _md_atomics_test "${SCSI_DEBUG_DEVICES[0]}" "${SCSI_DEBUG_DEVICES[1]}" \
+ "${SCSI_DEBUG_DEVICES[2]}" "${SCSI_DEBUG_DEVICES[3]}"
_exit_scsi_debug
diff --git a/tests/md/002.out b/tests/md/002.out
index 6b0a431..cd34e38 100644
--- a/tests/md/002.out
+++ b/tests/md/002.out
@@ -1,43 +1,119 @@
Running md/002
-TEST 1 RAID 0 - Verify md sysfs atomic attributes matches scsi - pass
-TEST 2 RAID 0 - Verify sysfs atomic attributes - pass
-TEST 3 RAID 0 - Verify md sysfs_atomic_max_bytes is less than or equal scsi sysfs_atomic_max_bytes - pass
-TEST 4 RAID 0 - check sysfs atomic_write_unit_max_bytes <= scsi_debug atomic_wr_max_length - pass
-TEST 5 RAID 0 - check sysfs atomic_write_unit_min_bytes = scsi_debug atomic_wr_gran - pass
-TEST 6 RAID 0 - check statx stx_atomic_write_unit_min - pass
-TEST 7 RAID 0 - check statx stx_atomic_write_unit_max - pass
-TEST 8 RAID 0 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 9 RAID 0 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + 512 bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
-TEST 10 RAID 0 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 11 RAID 0 - perform a pwritev2 with a size of sysfs_atomic_unit_min_bytes - 512 bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
-TEST 12 RAID 0 - Verify chunk size - pass
-TEST 1 RAID 1 - Verify md sysfs atomic attributes matches scsi - pass
-TEST 2 RAID 1 - Verify sysfs atomic attributes - pass
-TEST 3 RAID 1 - Verify md sysfs_atomic_max_bytes is less than or equal scsi sysfs_atomic_max_bytes - pass
-TEST 4 RAID 1 - check sysfs atomic_write_unit_max_bytes <= scsi_debug atomic_wr_max_length - pass
-TEST 5 RAID 1 - check sysfs atomic_write_unit_min_bytes = scsi_debug atomic_wr_gran - pass
-TEST 6 RAID 1 - check statx stx_atomic_write_unit_min - pass
-TEST 7 RAID 1 - check statx stx_atomic_write_unit_max - pass
-TEST 8 RAID 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 9 RAID 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + 512 bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
-TEST 10 RAID 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 11 RAID 1 - perform a pwritev2 with a size of sysfs_atomic_unit_min_bytes - 512 bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
-TEST 1 RAID 10 - Verify md sysfs atomic attributes matches scsi - pass
-TEST 2 RAID 10 - Verify sysfs atomic attributes - pass
-TEST 3 RAID 10 - Verify md sysfs_atomic_max_bytes is less than or equal scsi sysfs_atomic_max_bytes - pass
-TEST 4 RAID 10 - check sysfs atomic_write_unit_max_bytes <= scsi_debug atomic_wr_max_length - pass
-TEST 5 RAID 10 - check sysfs atomic_write_unit_min_bytes = scsi_debug atomic_wr_gran - pass
-TEST 6 RAID 10 - check statx stx_atomic_write_unit_min - pass
-TEST 7 RAID 10 - check statx stx_atomic_write_unit_max - pass
-TEST 8 RAID 10 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 9 RAID 10 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + 512 bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
-TEST 10 RAID 10 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should be succesful - pass
-pwrite: Invalid argument
-TEST 11 RAID 10 - perform a pwritev2 with a size of sysfs_atomic_unit_min_bytes - 512 bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
-TEST 12 RAID 10 - Verify chunk size - pass
+TEST 1 raid0 step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid0 step 1 - Verify sysfs atomic attributes - pass
+TEST 3 raid0 step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid0 step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid0 step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid0 step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid0 step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid0 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid0 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid0 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid0 step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid0 step 2 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid0 step 2 - Verify sysfs atomic attributes - pass
+TEST 3 raid0 step 2 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid0 step 2 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid0 step 2 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid0 step 2 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid0 step 2 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid0 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid0 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid0 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid0 step 2 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid0 step 3 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid0 step 3 - Verify sysfs atomic attributes - pass
+TEST 3 raid0 step 3 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid0 step 3 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid0 step 3 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid0 step 3 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid0 step 3 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid0 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid0 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid0 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid0 step 3 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid0 step 4 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid0 step 4 - Verify sysfs atomic attributes - pass
+TEST 3 raid0 step 4 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid0 step 4 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid0 step 4 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid0 step 4 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid0 step 4 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid0 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid0 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid0 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid0 step 4 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid1 step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid1 step 1 - Verify sysfs atomic attributes - pass
+TEST 3 raid1 step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid1 step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid1 step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid1 step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid1 step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid1 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid1 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid1 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid1 step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid10 step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid10 step 1 - Verify sysfs atomic attributes - pass
+TEST 3 raid10 step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid10 step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid10 step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid10 step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid10 step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid10 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid10 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid10 step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid10 step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid10 step 2 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid10 step 2 - Verify sysfs atomic attributes - pass
+TEST 3 raid10 step 2 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid10 step 2 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid10 step 2 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid10 step 2 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid10 step 2 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid10 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid10 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid10 step 2 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid10 step 2 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid10 step 3 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid10 step 3 - Verify sysfs atomic attributes - pass
+TEST 3 raid10 step 3 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid10 step 3 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid10 step 3 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid10 step 3 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid10 step 3 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid10 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid10 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid10 step 3 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid10 step 3 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 raid10 step 4 - Verify md sysfs atomic attributes matches - pass
+TEST 2 raid10 step 4 - Verify sysfs atomic attributes - pass
+TEST 3 raid10 step 4 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 raid10 step 4 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 raid10 step 4 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 raid10 step 4 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 raid10 step 4 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 raid10 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 raid10 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 raid10 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 raid10 step 4 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
Test complete
--
2.43.5
* [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
` (2 preceding siblings ...)
2025-09-12 9:57 ` [PATCH blktests 3/7] md/002: convert to use _md_atomics_test John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-18 4:27 ` Shinichiro Kawasaki
2025-09-12 9:57 ` [PATCH blktests 5/7] md/rc: test atomic writes for dm-linear John Garry
` (3 subsequent siblings)
7 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
md/002 only tests SCSI via scsi_debug.
It is also useful to test NVMe, so add a specific test for that.
The results for 002 and 003 should be the same, so link them.
_md_atomics_test requires four devices with atomic writes support, so check
for that.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
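As a note, the device-selection logic this patch adds can be sketched as the
following standalone function (the function name and sample paths are
illustrative only, not part of the patch):

```shell
#!/bin/bash
# Sketch of the selection step in md/003: keep only the entries whose
# resolved sysfs path mentions "nvme" and report how many remain, so the
# caller can skip the test when fewer than four NVMe devices are present.
count_nvme_devs() {
	local path count=0
	for path in "$@"; do
		# md/003 resolves symlinks with readlink -f before matching
		if [[ "$path" == *nvme* ]]; then
			count=$((count + 1))
		fi
	done
	echo "$count"
}
```

For example, `count_nvme_devs /sys/block/nvme0n1 /sys/block/sda` prints 1,
which is below the required count of four.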
tests/md/002 | 2 +-
tests/md/002.out | 2 +-
tests/md/003 | 51 ++++++++++++++++++++++++++++++++++++++++++++++++
tests/md/003.out | 1 +
4 files changed, 54 insertions(+), 2 deletions(-)
create mode 100755 tests/md/003
create mode 120000 tests/md/003.out
diff --git a/tests/md/002 b/tests/md/002
index 990b64b..87b13f2 100755
--- a/tests/md/002
+++ b/tests/md/002
@@ -24,7 +24,7 @@ test() {
per_host_store=true
)
- echo "Running ${TEST_NAME}"
+ echo "Running md_atomics_test"
if ! _configure_scsi_debug "${scsi_debug_params[@]}"; then
return 1
diff --git a/tests/md/002.out b/tests/md/002.out
index cd34e38..b311a50 100644
--- a/tests/md/002.out
+++ b/tests/md/002.out
@@ -1,4 +1,4 @@
-Running md/002
+Running md_atomics_test
TEST 1 raid0 step 1 - Verify md sysfs atomic attributes matches - pass
TEST 2 raid0 step 1 - Verify sysfs atomic attributes - pass
TEST 3 raid0 step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
diff --git a/tests/md/003 b/tests/md/003
new file mode 100755
index 0000000..8128f8d
--- /dev/null
+++ b/tests/md/003
@@ -0,0 +1,51 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2025 Oracle and/or its affiliates
+#
+# Test NVMe Atomic Writes with MD devices
+
+. tests/nvme/rc
+. common/xfs
+
+DESCRIPTION="test md atomic writes for NVMe drives"
+QUICK=1
+
+requires() {
+ _nvme_requires
+}
+
+test() {
+ local ns
+ local testdev_count=0
+ declare -A NVME_TEST_DEVS
+ declare -A NVME_TEST_DEVS_NAME
+ declare -A NVME_TEST_DEVS_SYSFS
+
+ echo "Running md_atomics_test"
+
+ for i in "${!TEST_DEV_SYSFS_DIRS[@]}"; do
+ TEST_DEV=${TEST_DEV_SYSFS_DIRS[$i]}
+ if readlink -f "$TEST_DEV" | grep -q nvme; then
+ NVME_TEST_DEVS["$testdev_count"]="$i";
+ NVME_TEST_DEVS_SYSFS["$testdev_count"]="$TEST_DEV";
+ NVME_TEST_DEVS_NAME["$testdev_count"]="$(awk '{print substr($1,6) }' <<< $i)"
+ let testdev_count=testdev_count+1;
+ fi
+ done
+
+ for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
+ TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
+ TEST_DEV="${NVME_TEST_DEVS[$i]}"
+ _require_device_support_atomic_writes
+ done
+
+ if [[ $testdev_count -lt 4 ]]; then
+ SKIP_REASONS+=("requires at least 4 NVMe devices")
+ return 1
+ fi
+
+ _md_atomics_test "${NVME_TEST_DEVS_NAME[0]}" "${NVME_TEST_DEVS_NAME[1]}" \
+ "${NVME_TEST_DEVS_NAME[2]}" "${NVME_TEST_DEVS_NAME[3]}"
+
+ echo "Test complete"
+}
diff --git a/tests/md/003.out b/tests/md/003.out
new file mode 120000
index 0000000..0412a1f
--- /dev/null
+++ b/tests/md/003.out
@@ -0,0 +1 @@
+002.out
\ No newline at end of file
--
2.43.5
* [PATCH blktests 5/7] md/rc: test atomic writes for dm-linear
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
` (3 preceding siblings ...)
2025-09-12 9:57 ` [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 6/7] md/rc: test atomic writes for dm-stripe John Garry
` (2 subsequent siblings)
7 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
Introduce testing for dm-linear.
This requires device-mapper tools such as vgcreate and lvm.
dm-linear does not require a chunk size to be set, so only a single test
step is needed.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
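As a note, the parsing that the new _get_vgsize helper performs reduces to
the following (the sample vgdisplay line in the usage below is illustrative,
not captured from a real run):

```shell
#!/bin/bash
# Minimal sketch of _get_vgsize's parsing: take the "VG Size" line of
# `vgdisplay --units b` and strip everything that is not a digit, leaving
# the size in bytes. Reads vgdisplay output from stdin.
parse_vg_size() {
	grep 'VG Size' | tr -d -c 0-9
}
```

For example, piping a line such as "  VG Size  20971520 B" through
parse_vg_size yields 20971520.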
tests/md/002.out | 13 +++++++++++++
tests/md/rc | 42 +++++++++++++++++++++++++++++++++++++++++-
2 files changed, 54 insertions(+), 1 deletion(-)
diff --git a/tests/md/002.out b/tests/md/002.out
index b311a50..5426cf6 100644
--- a/tests/md/002.out
+++ b/tests/md/002.out
@@ -116,4 +116,17 @@ TEST 9 raid10 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_byt
TEST 10 raid10 step 4 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
pwrite: Invalid argument
TEST 11 raid10 step 4 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-linear step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-linear step 1 - Verify sysfs atomic attributes - pass
+TEST 3 dm-linear step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-linear step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-linear step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-linear step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-linear step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-linear step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-linear step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-linear step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-linear step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
Test complete
diff --git a/tests/md/rc b/tests/md/rc
index 105d283..a839a66 100644
--- a/tests/md/rc
+++ b/tests/md/rc
@@ -16,6 +16,8 @@ group_requires() {
_have_driver raid1
_have_driver raid10
_have_driver md-mod
+ _have_program vgcreate
+ _have_program lvm
}
declare -A MD_DEVICES
@@ -81,6 +83,11 @@ _md_atomics_boundaries_max() {
echo "0"
}
+_get_vgsize() {
+ vgsize=$(vgdisplay --units b blktests_vg00 | grep 'VG Size' | tr -d -c 0-9)
+ echo "$vgsize"
+}
+
_md_atomics_test() {
local md_atomic_unit_max
local md_atomic_unit_min
@@ -145,7 +152,7 @@ _md_atomics_test() {
let raw_atomic_write_boundary=0;
fi
- for personality in raid0 raid1 raid10; do
+ for personality in raid0 raid1 raid10 dm-linear; do
if [ "$personality" = raid0 ] || [ "$personality" = raid10 ]
then
step_limit=4
@@ -210,6 +217,29 @@ _md_atomics_test() {
md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
fi
+ if [ "$personality" = dm-linear ]
+ then
+ pvremove --force /dev/"${dev0}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev1}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev2}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev3}" 2> /dev/null 1>&2
+
+ pvcreate /dev/"${dev0}" 2> /dev/null 1>&2
+ pvcreate /dev/"${dev1}" 2> /dev/null 1>&2
+ pvcreate /dev/"${dev2}" 2> /dev/null 1>&2
+ pvcreate /dev/"${dev3}" 2> /dev/null 1>&2
+
+ echo y | vgcreate blktests_vg00 /dev/"${dev0}" /dev/"${dev1}" \
+ /dev/"${dev2}" /dev/"${dev3}" 2> /dev/null 1>&2
+ fi
+
+ if [ "$personality" = dm-linear ]
+ then
+ vgsize=$(_get_vgsize)
+ echo y | lvm lvcreate -v -n blktests_lv -L "${vgsize}"B blktests_vg00 2> /dev/null 1>&2
+ md_dev=$(readlink /dev/mapper/blktests_vg00-blktests_lv | sed 's|\.\./||')
+ fi
+
md_dev_sysfs="/sys/devices/virtual/block/${md_dev}"
sysfs_logical_block_size=$(< "${md_dev_sysfs}"/queue/logical_block_size)
@@ -380,6 +410,16 @@ _md_atomics_test() {
mdadm --zero-superblock /dev/"${dev2}" 2> /dev/null 1>&2
mdadm --zero-superblock /dev/"${dev3}" 2> /dev/null 1>&2
fi
+
+ if [ "$personality" = dm-linear ]
+ then
+ lvremove --force /dev/mapper/blktests_vg00-blktests_lv 2> /dev/null 1>&2
+ vgremove --force blktests_vg00 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev0}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev1}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev2}" 2> /dev/null 1>&2
+ pvremove --force /dev/"${dev3}" 2> /dev/null 1>&2
+ fi
done
done
}
--
2.43.5
* [PATCH blktests 6/7] md/rc: test atomic writes for dm-stripe
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
` (4 preceding siblings ...)
2025-09-12 9:57 ` [PATCH blktests 5/7] md/rc: test atomic writes for dm-linear John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 7/7] md/rc: test atomic writes for dm-mirror John Garry
2025-09-16 8:55 ` [PATCH blktests 0/7] Further stacked device atomic writes testing Shinichiro Kawasaki
7 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
Ensure that the drives are at least 5MB in size, since we need to know the
size of the volume to create. For dm-linear, we could use the VG size.
However, that doesn't work for dm-stripe, where we want the volume to span
all disks; for that, we need to know the minimum size of each disk.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
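As a note, the sizing rule in the commit message can be checked with a small
helper (the function name and the meta_mb allowance are illustrative, not
values taken from LVM documentation):

```shell
#!/bin/bash
# Back-of-the-envelope sizing for a striped LV: lv_mb megabytes spread over
# "stripes" physical volumes needs roughly lv_mb / stripes per PV (rounded
# up), plus some headroom for LVM metadata.
min_pv_mb() {
	local lv_mb=$1 stripes=$2 meta_mb=$3
	echo $(( (lv_mb + stripes - 1) / stripes + meta_mb ))
}
```

With a 10M LV over 4 stripes and a 1MB metadata allowance, min_pv_mb gives
4, so the 5MB minimum device size required here has some margin.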
tests/md/002 | 1 +
tests/md/002.out | 52 ++++++++++++++++++++++++++++++++++++++++++++++++
tests/md/003 | 1 +
tests/md/rc | 28 ++++++++++++++++++++++----
4 files changed, 78 insertions(+), 4 deletions(-)
diff --git a/tests/md/002 b/tests/md/002
index 87b13f2..0470a1b 100755
--- a/tests/md/002
+++ b/tests/md/002
@@ -22,6 +22,7 @@ test() {
num_tgts=1
add_host=4
per_host_store=true
+ dev_size_mb=5
)
echo "Running md_atomics_test"
diff --git a/tests/md/002.out b/tests/md/002.out
index 5426cf6..cce1b1c 100644
--- a/tests/md/002.out
+++ b/tests/md/002.out
@@ -129,4 +129,56 @@ TEST 9 dm-linear step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_
TEST 10 dm-linear step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
pwrite: Invalid argument
TEST 11 dm-linear step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-stripe step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-stripe step 1 - Verify sysfs atomic attributes - pass
+TEST 3 dm-stripe step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-stripe step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-stripe step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-stripe step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-stripe step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-stripe step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-stripe step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-stripe step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-stripe step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-stripe step 2 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-stripe step 2 - Verify sysfs atomic attributes - pass
+TEST 3 dm-stripe step 2 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-stripe step 2 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-stripe step 2 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-stripe step 2 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-stripe step 2 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-stripe step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-stripe step 2 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-stripe step 2 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-stripe step 2 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-stripe step 3 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-stripe step 3 - Verify sysfs atomic attributes - pass
+TEST 3 dm-stripe step 3 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-stripe step 3 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-stripe step 3 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-stripe step 3 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-stripe step 3 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-stripe step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-stripe step 3 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-stripe step 3 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-stripe step 3 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-stripe step 4 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-stripe step 4 - Verify sysfs atomic attributes - pass
+TEST 3 dm-stripe step 4 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-stripe step 4 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-stripe step 4 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-stripe step 4 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-stripe step 4 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-stripe step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-stripe step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-stripe step 4 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-stripe step 4 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
Test complete
diff --git a/tests/md/003 b/tests/md/003
index 8128f8d..3e97657 100755
--- a/tests/md/003
+++ b/tests/md/003
@@ -37,6 +37,7 @@ test() {
TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
TEST_DEV="${NVME_TEST_DEVS[$i]}"
_require_device_support_atomic_writes
+ _require_test_dev_size 5m
done
if [[ $testdev_count -lt 4 ]]; then
diff --git a/tests/md/rc b/tests/md/rc
index a839a66..da04b4a 100644
--- a/tests/md/rc
+++ b/tests/md/rc
@@ -152,8 +152,9 @@ _md_atomics_test() {
let raw_atomic_write_boundary=0;
fi
- for personality in raid0 raid1 raid10 dm-linear; do
- if [ "$personality" = raid0 ] || [ "$personality" = raid10 ]
+ for personality in raid0 raid1 raid10 dm-linear dm-stripe; do
+ if [ "$personality" = raid0 ] || [ "$personality" = raid10 ] || \
+ [ "$personality" = dm-stripe ]
then
step_limit=4
else
@@ -217,7 +218,7 @@ _md_atomics_test() {
md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
fi
- if [ "$personality" = dm-linear ]
+ if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ]
then
pvremove --force /dev/"${dev0}" 2> /dev/null 1>&2
pvremove --force /dev/"${dev1}" 2> /dev/null 1>&2
@@ -233,6 +234,25 @@ _md_atomics_test() {
/dev/"${dev2}" /dev/"${dev3}" 2> /dev/null 1>&2
fi
+ if [ "$personality" = dm-stripe ]
+ then
+ atomics_boundaries_unit_max=$(_md_atomics_boundaries_max $raw_atomic_write_boundary $md_chunk_size "1")
+ atomics_boundaries_max=$(_md_atomics_boundaries_max $raw_atomic_write_boundary $md_chunk_size "0")
+
+ # The caller should ensure the test device size; we ask for a total of 10M,
+ # so each device should be at least (10M + meta) / 4 in size, meaning 5M each is enough
+ echo y | lvm lvcreate --stripes 4 --stripesize "${md_chunk_size_kb}" -L 10M \
+ -n blktests_lv blktests_vg00 2> /dev/null 1>&2
+ md_dev=$(readlink /dev/mapper/blktests_vg00-blktests_lv | sed 's|\.\./||')
+ expected_atomic_write_unit_min=$(_min $expected_atomic_write_unit_min $atomics_boundaries_unit_max)
+ expected_atomic_write_unit_max=$(_min $expected_atomic_write_unit_max $atomics_boundaries_unit_max)
+ expected_atomic_write_max=$(_min $expected_atomic_write_max $atomics_boundaries_max)
+ if [ "$atomics_boundaries_max" -eq 0 ]
+ then
+ expected_atomic_write_boundary=0
+ fi
+ fi
+
if [ "$personality" = dm-linear ]
then
vgsize=$(_get_vgsize)
@@ -411,7 +431,7 @@ _md_atomics_test() {
mdadm --zero-superblock /dev/"${dev3}" 2> /dev/null 1>&2
fi
- if [ "$personality" = dm-linear ]
+ if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ]
then
lvremove --force /dev/mapper/blktests_vg00-blktests_lv 2> /dev/null 1>&2
vgremove --force blktests_vg00 2> /dev/null 1>&2
--
2.43.5
* [PATCH blktests 7/7] md/rc: test atomic writes for dm-mirror
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
` (5 preceding siblings ...)
2025-09-12 9:57 ` [PATCH blktests 6/7] md/rc: test atomic writes for dm-stripe John Garry
@ 2025-09-12 9:57 ` John Garry
2025-09-16 8:55 ` [PATCH blktests 0/7] Further stacked device atomic writes testing Shinichiro Kawasaki
7 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-12 9:57 UTC (permalink / raw)
To: linux-block, shinichiro.kawasaki; +Cc: John Garry
Raise the required device size to 16MB, which is enough to create a 2MB
mirror array.
Signed-off-by: John Garry <john.g.garry@oracle.com>
---
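As a note, the space accounting behind the 16MB requirement can be sketched
as follows (the function name and the log_mb allowance are illustrative, not
values from the LVM documentation):

```shell
#!/bin/bash
# Rough accounting for `lvcreate --type mirror -mN`: it allocates N+1
# full-size images of the LV plus a small mirror log, so the total space
# needed across the VG is lv_mb * (N + 1) plus the log allowance.
mirror_total_mb() {
	local lv_mb=$1 mirrors=$2 log_mb=$3
	echo $(( lv_mb * (mirrors + 1) + log_mb ))
}
```

For the 2M mirror with -m3 created here, mirror_total_mb 2 3 1 gives 9,
comfortably within four 16MB devices.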
tests/md/002 | 2 +-
tests/md/002.out | 13 +++++++++++++
tests/md/003 | 2 +-
tests/md/rc | 15 ++++++++++++---
4 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/tests/md/002 b/tests/md/002
index 0470a1b..de3d908 100755
--- a/tests/md/002
+++ b/tests/md/002
@@ -22,7 +22,7 @@ test() {
num_tgts=1
add_host=4
per_host_store=true
- dev_size_mb=5
+ dev_size_mb=16
)
echo "Running md_atomics_test"
diff --git a/tests/md/002.out b/tests/md/002.out
index cce1b1c..c6628bf 100644
--- a/tests/md/002.out
+++ b/tests/md/002.out
@@ -181,4 +181,17 @@ TEST 9 dm-stripe step 4 - perform a pwritev2 with size of sysfs_atomic_unit_max_
TEST 10 dm-stripe step 4 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
pwrite: Invalid argument
TEST 11 dm-stripe step 4 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+TEST 1 dm-mirror step 1 - Verify md sysfs atomic attributes matches - pass
+TEST 2 dm-mirror step 1 - Verify sysfs atomic attributes - pass
+TEST 3 dm-mirror step 1 - Verify md sysfs_atomic_write_max is equal to expected_atomic_write_max - pass
+TEST 4 dm-mirror step 1 - Verify sysfs atomic_write_unit_max_bytes = expected_atomic_write_unit_max - pass
+TEST 5 dm-mirror step 1 - Verify sysfs atomic_write_unit_boundary_bytes = expected atomic_write_unit_boundary_bytes - pass
+TEST 6 dm-mirror step 1 - Verify statx stx_atomic_write_unit_min - pass
+TEST 7 dm-mirror step 1 - Verify statx stx_atomic_write_unit_max - pass
+TEST 8 dm-mirror step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 9 dm-mirror step 1 - perform a pwritev2 with size of sysfs_atomic_unit_max_bytes + LBS bytes with RWF_ATOMIC flag - pwritev2 should not be succesful - pass
+TEST 10 dm-mirror step 1 - perform a pwritev2 with size of sysfs_atomic_unit_min_bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
+pwrite: Invalid argument
+TEST 11 dm-mirror step 1 - perform a pwritev2 with a size of sysfs_atomic_write_unit_max_bytes - LBS bytes with RWF_ATOMIC flag - pwritev2 should fail - pass
Test complete
diff --git a/tests/md/003 b/tests/md/003
index 3e97657..453669c 100755
--- a/tests/md/003
+++ b/tests/md/003
@@ -37,7 +37,7 @@ test() {
TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
TEST_DEV="${NVME_TEST_DEVS[$i]}"
_require_device_support_atomic_writes
- _require_test_dev_size 5m
+ _require_test_dev_size 16m
done
if [[ $testdev_count -lt 4 ]]; then
diff --git a/tests/md/rc b/tests/md/rc
index da04b4a..677efbf 100644
--- a/tests/md/rc
+++ b/tests/md/rc
@@ -152,7 +152,7 @@ _md_atomics_test() {
let raw_atomic_write_boundary=0;
fi
- for personality in raid0 raid1 raid10 dm-linear dm-stripe; do
+ for personality in raid0 raid1 raid10 dm-linear dm-stripe dm-mirror; do
if [ "$personality" = raid0 ] || [ "$personality" = raid10 ] || \
[ "$personality" = dm-stripe ]
then
@@ -218,7 +218,8 @@ _md_atomics_test() {
md_dev=$(readlink /dev/md/blktests_md | sed 's|\.\./||')
fi
- if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ]
+ if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ] || \
+ [ "$personality" = dm-mirror ]
then
pvremove --force /dev/"${dev0}" 2> /dev/null 1>&2
pvremove --force /dev/"${dev1}" 2> /dev/null 1>&2
@@ -260,6 +261,13 @@ _md_atomics_test() {
md_dev=$(readlink /dev/mapper/blktests_vg00-blktests_lv | sed 's|\.\./||')
fi
+ if [ "$personality" = dm-mirror ]
+ then
+ echo y | lvm lvcreate --type mirror -m3 -L 2M -n blktests_lv blktests_vg00 2> /dev/null 1>&2
+
+ md_dev=$(readlink /dev/mapper/blktests_vg00-blktests_lv | sed 's|\.\./||')
+ fi
+
md_dev_sysfs="/sys/devices/virtual/block/${md_dev}"
sysfs_logical_block_size=$(< "${md_dev_sysfs}"/queue/logical_block_size)
@@ -431,7 +439,8 @@ _md_atomics_test() {
mdadm --zero-superblock /dev/"${dev3}" 2> /dev/null 1>&2
fi
- if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ]
+ if [ "$personality" = dm-linear ] || [ "$personality" = dm-stripe ] || \
+ [ "$personality" = dm-mirror ]
then
lvremove --force /dev/mapper/blktests_vg00-blktests_lv 2> /dev/null 1>&2
vgremove --force blktests_vg00 2> /dev/null 1>&2
--
2.43.5
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
` (6 preceding siblings ...)
2025-09-12 9:57 ` [PATCH blktests 7/7] md/rc: test atomic writes for dm-mirror John Garry
@ 2025-09-16 8:55 ` Shinichiro Kawasaki
2025-09-16 10:20 ` John Garry
7 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-16 8:55 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 12, 2025 / 09:57, John Garry wrote:
> The testing of atomic writes support for stacked devices is limited.
>
> We only test scsi_debug and for a limited set of personalities.
>
> Extend to test NVMe and also extend to the following stacked device
> personalities:
> - dm-linear
> - dm-stripe
> - dm-mirror
>
> Also add more strict atomic writes limits testing.
>
> John Garry (7):
> common/rc: add _min()
> md/rc: add _md_atomics_test
> md/002: convert to use _md_atomics_test
> md/003: add NVMe atomic write tests for stacked devices
Hello John, thanks for this series. Overall, this series looks valuable to me
since it expands the test contents and target devices. It also minimizes code
duplication, which is good.
Having said that, I noticed a challenge in the series, especially in the 4th
patch "md/003: add NVMe atomic write tests for stacked devices". This patch
introduces a new test case md/003 that uses four NVME devices. Actually, this is
the very first test case which runs a test on multiple devices that users define
in the TEST_DEVS variable.
Currently, blktests expects that each test case,
a) implements test(), which prepares the test target device(s) itself and tests
them, or
b) implements test_device(), which tests a single TEST_DEV taken from the
TEST_DEVS that the user prepared
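As a rough sketch of that contract (the hook names test() and test_device() are
real blktests hooks; the dispatch loop below is illustrative only, not the
actual framework code):

```shell
#!/bin/bash
# Illustrative dispatcher: test() runs once and prepares its own devices;
# test_device() runs once per user-supplied device in TEST_DEVS.

TEST_DEVS=(/dev/nvme0n1 /dev/nvme1n1)

test_device() {
	echo "testing $TEST_DEV"
}

run_test_case() {
	local TEST_DEV
	if declare -f test_device > /dev/null; then
		# convention b): one invocation per TEST_DEVS element
		for TEST_DEV in "${TEST_DEVS[@]}"; do
			test_device
		done
	elif declare -f test > /dev/null; then
		# convention a): a single invocation, no TEST_DEVS involvement
		test
	fi
}
```

With the two devices above, run_test_case invokes test_device() twice, once per
device.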
The test case md/003 tests multiple devices. This is beyond the current blktests
assumption. md/003 implements test(), and it refers to TEST_DEVS. It appears to
work, but it breaks the expectation above. I am concerned this will confuse users.
For example, when a user defines 4 NVME devices in TEST_DEVS, md/003 is run only
once, but other test cases are run 4 times. It will also confuse blktests test
case developers, since referring to TEST_DEVS from test() is not a guided
pattern (e.g., in the ./new script). So I think a different approach is required
to meet your goal.
I can think of two approaches. The first one is to follow the guide a) above.
Assuming nvme loop devices can be used for the atomic test, md/003 can prepare 4
nvme loop devices and use them for the test. This meets the expectation. It
would also allow running the test case where NVME devices are not available.
Q: Can we use nvme loop devices for the atomic test?
If nvme loop devices cannot be used for the atomic test, or if you prefer to
run the test on real NVME devices, I think it would be better to improve the
blktests framework to support using multiple devices for a single test case. I
think new variables and functions should be introduced to support this, to avoid
the confusion that I noted above. For example, the test case should implement
the test in test_device_array() instead of test(), and it should refer to
TEST_DEV_ARRAY that users define instead of TEST_DEVS.
Based on the second approach, I quickly prototyped the blktests change [1]. I
also modified md/003 to adapt to the change [2].
[1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
[2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
Please let me know your thoughts on the two approaches.
P.S. I will have some more comments on the details of the series, but before
making those comments, I would like to clarify how to resolve the challenge
above.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-16 8:55 ` [PATCH blktests 0/7] Further stacked device atomic writes testing Shinichiro Kawasaki
@ 2025-09-16 10:20 ` John Garry
2025-09-16 11:55 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-16 10:20 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 16/09/2025 09:55, Shinichiro Kawasaki wrote:
> On Sep 12, 2025 / 09:57, John Garry wrote:
>> The testing of atomic writes support for stacked devices is limited.
>>
>> We only test scsi_debug and for a limited set of personalities.
>>
>> Extend to test NVMe and also extend to the following stacked device
>> personalities:
>> - dm-linear
>> - dm-stripe
>> - dm-mirror
>>
>> Also add more strict atomic writes limits testing.
>>
>> John Garry (7):
>> common/rc: add _min()
>> md/rc: add _md_atomics_test
>> md/002: convert to use _md_atomics_test
>> md/003: add NVMe atomic write tests for stacked devices
>
> Hello John, thanks for this series. Overall, this series looks valuable for me
> since it expands the test contents and target devices. Also it minimizes code
> duplication, which is good.
thanks for checking
>
>
> Having said that, I noticed a challenge in the series, especially in the 4th
> patch "md/003: add NVMe atomic write tests for stacked devices". This patch
> introduces a new test case md/003 that uses four NVME devices. Actually, this is
> the very first test case which runs test for multiple devices that users define
> in the TEST_DEVS variable.
>
> Currently, blktests expects that each test case,
>
> a) implements test() which prepares test target device/s in it and test the
> device/s, or,
> b) implements test_device() which tests single TEST_DEV taken from
> TEST_DEVS that the user prepared
>
> The test case md/003 tests multiple devices. This is beyond the current blktests
> assumption. md/003 implements test(), and it refers to TEST_DEVS. It looks
> working, but it breaks the expectation above.
Sure, I do think that the current infrastructure cannot handle what I
want to do. I want to test multiple specified devices in tandem. md/002
does not have such a problem, as it creates the devices itself (and so
can specify test()).
> I'm concerned this will confuse users.
understood
> For example, when a user defines 4 NVME devices in TEST_DEVS, md/003 is run only
> once, but other test cases are run 4 times.
Yes
> It also will confuse blktests test
> case developers, since it is not guided to refer to TEST_DEVS from test(): e.g.,
> ./new script. So I think a different approach is required to meet your goal.
ok
JFYI, I had been using QEMU to test this with virtual NVMe devices. This
allows me to manually set the atomic properties of the devices for good
test coverage.
>
>
> I can think of two approaches. The first one is to follow the guide a) above.
> Assuming nvme loop devices can be used for the atomic test,
I am not sure if they are, as I don't think that any such device will
support atomics. I did already consider this.
> md/003 can prepare 4
> nvme loop devices and use it for test. This meets the expectation. This also
> will allow to run the test case where NVME devices are not available.
>
> Q: Can we use nvme loop devices for the atomic test?
As above, unfortunately I don't think so.
Indeed, the test really only tests NVMe device queue limits and not
really the atomic behaviour itself. If there were a way to configure the
atomics-related queue limits for nvmet, then it could work, but I don't
think there is. Indeed, it does not really make sense for these to
be configured manually, as they are real HW device properties.
>
> If nvme loop devices can not be used for the atomic test, or if you prefer to
> run the test for the real NVME devices, I think it would be better to improve the
> blktests framework to support using multiple devices for a single test case. I
> think new variables and functions should be introduced to support it, to avoid
> the confusions that I noted above. For example, the test case should implement
> the test in test_device_array() instead of test(), and it should refer to
> TEST_DEV_ARRAY that users define instead of TEST_DEVS.
sounds reasonable
> > Based on the second approach, I quickly prototyped the blktests
change [1]. I
> also modified md/003 to adapt to the change [2].
>
> [1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
> [2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
>
> Please let me know your thoughts on the two approaches.
Let me check it, thanks!
>
>
> P.S. I will have some more comments on the details of the series, but before
> making those comments, I would like to clarify how to resolve the challenge
> above.
ok, good.
Cheers,
John
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-16 10:20 ` John Garry
@ 2025-09-16 11:55 ` John Garry
2025-09-16 12:23 ` Shinichiro Kawasaki
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-16 11:55 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 16/09/2025 11:20, John Garry wrote:
>> also modified md/003 to adapt to the change [2].
>>
>> [1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
>> [2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
>>
>> Please let me know your thoughts on the two approaches.
>
> Let me check it, thanks!
I gave it a spin for 003 and it looks to work ok - thanks!
A further comment I have on my own code is about this snippet from 003:
for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
TEST_DEV="${NVME_TEST_DEVS[$i]}"
_require_device_support_atomic_writes
_require_test_dev_size 16m
done
Notice that I set TEST_DEV_SYSFS and TEST_DEV, as these are required for
the _require_device_support_atomic_writes and _require_test_dev_size
calls. I'm just trying to reuse helpers normally used for test_device().
Is it ok to do this? I'm not sure if it is a bit of a bodge...
John
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-16 11:55 ` John Garry
@ 2025-09-16 12:23 ` Shinichiro Kawasaki
2025-09-16 12:27 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-16 12:23 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 16, 2025 / 12:55, John Garry wrote:
> On 16/09/2025 11:20, John Garry wrote:
> > > also modified md/003 to adapt to the change [2].
> > >
> > > [1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
> > > [2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
> > >
> > > Please let me know your thoughts on the two approaches.
> >
> > Let me check it, thanks!
>
> I gave it a spin for 003 and it looks to work ok - thanks!
Sounds good! Then let's pursue this solution approach :)
Let me have a day or two to improve the patch [1]. I rethought and now I think
TEST_DEV_ARRAY values will be test case dependent. When we have another test
case with multiple devices, the new test will probably require a different set
of devices from md/002. So TEST_DEV_ARRAY can be an associative array:
TEST_DEV_ARRAY[md/002]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"
I need to do some trials to see if this idea is feasible.
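For what it's worth, the per-test-case lookup does look feasible in bash; a
minimal illustration (the variable name is from this discussion, and the helper
below is hypothetical):

```shell
#!/bin/bash
# Illustrative only: an associative array keyed by test case name,
# with the framework splitting the entry into a per-case device list.

declare -A TEST_DEV_ARRAY
TEST_DEV_ARRAY[md/002]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"

# Hypothetical framework helper: count the devices configured for a test case
test_case_dev_count() {
	local -a devs
	read -r -a devs <<< "${TEST_DEV_ARRAY[$1]}"
	echo "${#devs[@]}"
}
```

A test case with no entry simply sees an empty device list, which the framework
could turn into a skip reason.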
>
> A further comment I have on my own code is about this snippet from 003:
>
> for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
> TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
> TEST_DEV="${NVME_TEST_DEVS[$i]}"
> _require_device_support_atomic_writes
> _require_test_dev_size 16m
> done
>
> Notice that I set TEST_DEV_SYSFS and TEST_DEV, as these are required for the
> _require_device_support_atomic_writes and _require_test_dev_size calls. I'm
> just trying to reuse helpers normally used for test_device(). Is this ok to
> do so? I'm not sure if it is a bit of a bodge...
Not really, TEST_DEV_SYSFS and TEST_DEV are part of the interface between the
blktests framework and test cases. They should be set only by the blktests
framework, and test cases should not modify them. My prototype patch [1]
ensures that device_requires() is called for each element of the
TEST_DEV_ARRAY. Then, the code snippet you quoted can be replaced as follows:
diff --git a/tests/md/003 b/tests/md/003
index 94c1132..765c4cc 100755
--- a/tests/md/003
+++ b/tests/md/003
@@ -14,6 +14,11 @@ requires() {
_nvme_requires
}
+device_requires() {
+ _require_device_support_atomic_writes
+ _require_test_dev_size 16m
+}
+
test_device_array() {
local ns
local testdev_count=0
@@ -33,13 +38,6 @@ test_device_array() {
fi
done
- for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
- TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
- TEST_DEV="${NVME_TEST_DEVS[$i]}"
- _require_device_support_atomic_writes
- _require_test_dev_size 16m
- done
-
if [[ $testdev_count -lt 4 ]]; then
SKIP_REASONS+=("requires at least 4 NVMe devices")
return 1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-16 12:23 ` Shinichiro Kawasaki
@ 2025-09-16 12:27 ` John Garry
2025-09-17 12:02 ` Shinichiro Kawasaki
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-16 12:27 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 16/09/2025 13:23, Shinichiro Kawasaki wrote:
> On Sep 16, 2025 / 12:55, John Garry wrote:
>> On 16/09/2025 11:20, John Garry wrote:
>>>> also modified md/003 to adapt to the change [2].
>>>>
>>>> [1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
>>>> [2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
>>>>
>>>> Please let me know your thoughts on the two approaches.
>>>
>>> Let me check it, thanks!
>>
>> I gave it a spin for 003 and it looks to work ok - thanks!
>
> Sounds good! Then let's pursue this solution approach :)
ok, great!
>
> Let me have a day or two to improve the patch [1]. I rethought and now I think
> TEST_DEV_ARRAY values will be test case dependent. When we have another test
> case to have multiple devices, the new test will require different set of
> devices from md/002, probably. So TEST_DEV_ARRAY can be an associative array:
>
> TEST_DEV_ARRAY[md/002]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"
>
> I need to do some trials to see if this idea is feasible.
ok
>
>>
>> A further comment I have on my own code is about this snippet from 003:
>>
>> for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
>> TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
>> TEST_DEV="${NVME_TEST_DEVS[$i]}"
>> _require_device_support_atomic_writes
>> _require_test_dev_size 16m
>> done
>>
>> Notice that I set TEST_DEV_SYSFS and TEST_DEV, as these are required for the
>> _require_device_support_atomic_writes and _require_test_dev_size calls. I'm
>> just trying to reuse helpers normally used for test_device(). Is this ok to
>> do so? I'm not sure if it is a bit of a bodge...
>
> Not really, TEST_DEV_SYSFS and TEST_DEV are part of the interface between the
> blktests framework and test cases. They should be set only by the blktests
> framework, and test cases should not modify them. My prototype patch [1]
> ensures that device_requires() is called for each element of the
> TEST_DEV_ARRAY. Then, the code snippet you quoted can be replaced as follows:
ok, good
>
> diff --git a/tests/md/003 b/tests/md/003
> index 94c1132..765c4cc 100755
> --- a/tests/md/003
> +++ b/tests/md/003
> @@ -14,6 +14,11 @@ requires() {
> _nvme_requires
> }
>
> +device_requires() {
> + _require_device_support_atomic_writes
Incidentally, I think that we can drop this, as it is worth testing
stacking of devices which don't support atomic writes (to ensure that
the stacked device also does not support atomic writes).
> + _require_test_dev_size 16m
> +}
> +
> test_device_array() {
> local ns
> local testdev_count=0
> @@ -33,13 +38,6 @@ test_device_array() {
> fi
> done
>
Thanks for the help!
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-16 12:27 ` John Garry
@ 2025-09-17 12:02 ` Shinichiro Kawasaki
2025-09-17 13:12 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-17 12:02 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 16, 2025 / 13:27, John Garry wrote:
> On 16/09/2025 13:23, Shinichiro Kawasaki wrote:
> > On Sep 16, 2025 / 12:55, John Garry wrote:
> > > On 16/09/2025 11:20, John Garry wrote:
> > > > > also modified md/003 to adapt to the change [2].
> > > > >
> > > > > [1] https://github.com/kawasaki/blktests/commit/7db35a16d7410cae728da8d6b9b2483e33e9c99b
> > > > > [2] https://github.com/kawasaki/blktests/commit/278e6c74deba68e3044abf0e6c3ec350c0bc4a40
> > > > >
> > > > > Please let me know your thoughts on the two approaches.
> > > >
> > > > Let me check it, thanks!
> > >
> > > I gave it a spin for 003 and it looks to work ok - thanks!
> >
> > Sounds good! Then let's seek for this solution approach :)
>
> ok, great!
>
> >
> > Let me have a day or two to improve the patch [1]. I rethought and now I think
> > TEST_DEV_ARRAY values will be test case dependent. When we have another test
> > case to have multiple devices, the new test will require different set of
> > devices from md/002, probably. So TEST_DEV_ARRAY can be an associative array:
> >
> > TEST_DEV_ARRAY[md/002]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"
> >
> > I need to do some trials to see if this idea is feasible.
>
> ok
FYI, I implemented the idea above, and it looks to be working well. I created a
blktests patch series and posted it [3]. Let's see how the review process will go.
[3] https://lore.kernel.org/linux-block/20250917114920.142996-1-shinichiro.kawasaki@wdc.com/
The series introduces a slightly different config file variable, TEST_CASE_DEV_ARRAY.
If you give it a try, please define it like:
TEST_CASE_DEV_ARRAY[md/003]="/dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1"
It also has slightly different variables for use in the test_device_array()
function: TEST_DEV_ARRAY and TEST_DEV_ARRAY_SYSFS_DIRS. As an example, I made a
quick commit on top of your patches [4].
[4] https://github.com/kawasaki/blktests/commit/fae0b3b617a19dab60610f50361bb0da6e0543ea
I will review details of your patches tomorrow.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-17 12:02 ` Shinichiro Kawasaki
@ 2025-09-17 13:12 ` John Garry
2025-09-17 16:22 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-17 13:12 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 17/09/2025 13:02, Shinichiro Kawasaki wrote:
>>> TEST_DEV_ARRAY[md/002]="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvvme3n1"
>>>
>>> I need to do some trials to see if this idea is feasible.
>> ok
> FYI, I implemented the idea above, and it looks to be working well. I created a
> blktests patch series and posted it [3]. Let's see how the review process will go.
>
> [3] https://lore.kernel.org/linux-block/20250917114920.142996-1-shinichiro.kawasaki@wdc.com/
>
> The series introduces a slightly different config file variable, TEST_CASE_DEV_ARRAY.
> If you give it a try, please define it like,
>
> TEST_CASE_DEV_ARRAY[md/003]="/dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1"
ok, understood
>
> It also has slightly different variables for use in the test_device_array()
> function: TEST_DEV_ARRAY and TEST_DEV_ARRAY_SYSFS_DIRS. As an example, I made a
> quick commit on top of your patches [4].
>
> [4] https://github.com/kawasaki/blktests/commit/fae0b3b617a19dab60610f50361bb0da6e0543ea
>
> I will review details of your patches tomorrow.
great, thanks.
I'll test md/002 and md/003 today with all these changes.
John
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-17 13:12 ` John Garry
@ 2025-09-17 16:22 ` John Garry
2025-09-18 4:36 ` Shinichiro Kawasaki
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-17 16:22 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 17/09/2025 14:12, John Garry wrote:
>> It also has slightly different variables for use in the
>> test_device_array()
>> function: TEST_DEV_ARRAY and TEST_DEV_ARRAY_SYSFS_DIRS. As an example,
>> I made a
>> quick commit on top of your patches [4].
>>
>> [4] https://github.com/kawasaki/blktests/commit/fae0b3b617a19dab60610f50361bb0da6e0543ea
>> I will review details of your patches tomorrow.
>
> great, thanks.
>
> I'll test md/002 and md/003 today with all these changes.
I gave it a quick spin and it looks to work ok.
About TEST_CASE_DEV_ARRAY, is it scalable to index this per test case?
Thanks,
John
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 1/7] common/rc: add _min()
2025-09-12 9:57 ` [PATCH blktests 1/7] common/rc: add _min() John Garry
@ 2025-09-18 4:08 ` Shinichiro Kawasaki
2025-09-18 7:33 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-18 4:08 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 12, 2025 / 09:57, John Garry wrote:
> Add a helper to find the minimum of two numbers.
>
> A similar helper is being added in xfstests:
> https://lore.kernel.org/linux-xfs/cover.1755849134.git.ojaswin@linux.ibm.com/T/#m962683d8115979e57342d2644660230ee978c803
>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
> common/rc | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/common/rc b/common/rc
> index 946dee1..77a0f45 100644
> --- a/common/rc
> +++ b/common/rc
> @@ -700,3 +700,14 @@ _real_dev()
> fi
> echo "$dev"
> }
> +
> +_min() {
> + local ret
> +
> + for arg in "$@"; do
> + if [ -z "$ret" ] || (( $arg < $ret )); then
The line above and,
> + ret="$arg"
> + fi
> + done
> + echo $ret
this line above caused shellcheck warnings below.
$ make check
shellcheck -x -e SC2119 -f gcc check common/* \
tests/*/rc tests/*/[0-9]*[0-9] src/*.sh
common/rc:708:26: note: $/${} is unnecessary on arithmetic variables. [SC2004]
common/rc:708:33: note: $/${} is unnecessary on arithmetic variables. [SC2004]
common/rc:712:7: note: Double quote to prevent globbing and word splitting. [SC2086]
tests/md/rc:28:12: note: $/${} is unnecessary on arithmetic variables. [SC2004]
tests/md/rc:28:21: note: $/${} is unnecessary on arithmetic variables. [SC2004]
...
Other patches caused other shellcheck warnings. Could you run "make check" and
address them?
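FWIW, one way the quoted warnings could be addressed (a sketch, not verified
against the actual tree: drop the $ inside (( )) for SC2004, declare the loop
variable local, and quote the final echo for SC2086):

```shell
# Sketch of a shellcheck-clean _min(); behavior matches the posted helper.
_min() {
	local ret arg

	for arg in "$@"; do
		# first iteration: ret is empty, so take the first argument
		if [ -z "$ret" ] || (( arg < ret )); then
			ret="$arg"
		fi
	done
	echo "$ret"
}
```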
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 2/7] md/rc: add _md_atomics_test
2025-09-12 9:57 ` [PATCH blktests 2/7] md/rc: add _md_atomics_test John Garry
@ 2025-09-18 4:17 ` Shinichiro Kawasaki
2025-09-18 7:36 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-18 4:17 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 12, 2025 / 09:57, John Garry wrote:
> The stacked device atomic writes testing is currently limited.
>
> md/002 currently only tests scsi_debug. SCSI does not support atomic
> boundaries, so it would be nice to test NVMe (which does support them).
>
> Furthermore, the testing in md/002 for chunk boundaries is very limited,
> in that we test only one boundary value. Indeed, for RAID0 and RAID10, a
> boundary should always be set for testing.
>
> Finally, md/002 only tests md RAID0/1/10. In future we will also want to
> test the following stacked device personalities which support atomic
> writes:
> - md-linear (being upstreamed)
> - dm-linear
> - dm-stripe
> - dm-mirror
>
> To solve all those problems, add a generic test handler,
> _md_atomics_test(). This can be extended for more extensive testing.
>
> This test handler will accept a group of devices and test as follows:
> a. calculate expected atomic write limits based on device limits
> b. Take results from a., and refine expected limits based on any chunk
> size
> c. loop through creating a stacked device for different chunk sizes. We loop
> once for any personality which does not have a chunk size, e.g. RAID1
> d. test sysfs and statx limits vs what is calculated in a. and b.
> e. test RWF_ATOMIC is accepted or rejected as expected
>
> Steps c, d, and e are really the same as md/002.
>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
> ---
> tests/md/rc | 372 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 372 insertions(+)
>
> diff --git a/tests/md/rc b/tests/md/rc
> index 96bcd97..105d283 100644
> --- a/tests/md/rc
> +++ b/tests/md/rc
> @@ -5,9 +5,381 @@
> # Tests for md raid
>
> . common/rc
> +. common/xfs
>
> group_requires() {
> + _have_kver 6 14 0
> _have_root
> _have_program mdadm
> + _have_xfs_io_atomic_write
I don't think either "_have_kver 6 14 0" or "_have_xfs_io_atomic_write" is
required for md/001. I suggest to introduce a new helper,
_stacked_atomic_test_requires() {
_have_kver 6 14 0
_have_xfs_io_atomic_write
}
and call it from requires() of md/002 and md/003.
> + _have_driver raid0
> + _have_driver raid1
> + _have_driver raid10
> _have_driver md-mod
> }
> +
> +declare -A MD_DEVICES
> +
> +_max_pow_of_two_factor() {
> + part1=$1
> + part2=-$1
> + retval=$(($part1 & $part2))
Nit: "local" declarations are missing for part1, part2 and retval.
Same comment for some other local variables introduced by this patch.
> + echo "$retval"
> +}
> +
> +# Find max atomic size given a boundary and chunk size
> +# @unit is set if we want atomic write "unit" size, i.e power-of-2
> +# @chunk must be > 0
> +_md_atomics_boundaries_max() {
> + boundary=$1
> + chunk=$2
> + unit=$3
> +
> + if [ "$boundary" -eq 0 ]
> + then
> + if [ "$unit" -eq 1 ]
> + then
> + retval=$(_max_pow_of_two_factor $chunk)
> + echo "$retval"
> + return 1
> + fi
> +
> + echo "$chunk"
> + return 1
When a bash function returns a non-zero value, it implies the function failed.
When this function returns 1 at the line above, does it indicate failure?
It looks like it echoes back a good number, so I guess just "return" is more
appropriate. Same comment for the other "return 1" instances in this function.
> + fi
> +
> + # boundary is always a power-of-2
> + if [ "$boundary" -eq "$chunk" ]
> + then
> + echo "$boundary"
> + return 1
> + fi
> +
> + if [ "$boundary" -gt "$chunk" ]
> + then
> + if (( $boundary % $chunk == 0))
> + then
> + if [ "$unit" -eq 1 ]
> + then
> + retval=$(_max_pow_of_two_factor $chunk)
> + echo "$retval"
> + return 1
> + fi
> + echo "$chunk"
> + return 1
> + fi
> + echo "0"
> + return 1
> + fi
> +
> + if (( $chunk % $boundary == 0))
> + then
> + echo "$boundary"
> + return 1
> + fi
> +
> + echo "0"
> +}
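As an aside on the helper quoted at the top of this hunk:
_max_pow_of_two_factor() relies on the two's-complement identity that x & -x
isolates the lowest set bit of x, which equals the largest power-of-two factor
of x. A standalone restatement of the same trick:

```shell
# x & -x keeps only the lowest set bit, i.e. the largest power-of-two
# divisor of x (e.g. 24 = 0b11000 -> 8).
_max_pow_of_two_factor() {
	local x=$1
	echo $(( x & -x ))
}
```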
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices
2025-09-12 9:57 ` [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices John Garry
@ 2025-09-18 4:27 ` Shinichiro Kawasaki
2025-09-18 7:44 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-18 4:27 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 12, 2025 / 09:57, John Garry wrote:
> md/002 only tests SCSI via scsi_debug.
>
> It is also useful to test NVMe, so add a specific test for that.
>
> The results for 002 and 003 should be the same, so link them.
>
> _md_atomics_test requires 4x devices with atomics support, so check for
> that.
>
> Signed-off-by: John Garry <john.g.garry@oracle.com>
[...]
> diff --git a/tests/md/003 b/tests/md/003
> new file mode 100755
> index 0000000..8128f8d
> --- /dev/null
> +++ b/tests/md/003
> @@ -0,0 +1,51 @@
> +#!/bin/bash
> +# SPDX-License-Identifier: GPL-3.0+
> +# Copyright (C) 2025 Oracle and/or its affiliates
> +#
> +# Test NVMe Atomic Writes with MD devices
> +
> +. tests/nvme/rc
It is not recommended to introduce dependencies across tests/* groups. If you
need some nvme-related helper functions, they should be placed not in
tests/nvme/rc but in common/nvme.
IIUC, tests/nvme/rc is required to call _nvme_requires() in requires(), but I
think _nvme_requires() is too much for this test case. I guess it is enough to
call _require_test_dev_is_nvme() from device_requires() in md/003. To do that, I
suggest adding another preparation patch which moves _require_test_dev_is_nvme()
from tests/nvme/rc to common/nvme. (This comment assumes the test_device_array()
support series.)
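A rough sketch of that arrangement is below. The helper body and names here are
assumptions for illustration, not code from either series:

```shell
# Sketch only: md/003 calling a relocated _require_test_dev_is_nvme()-style
# check from device_requires(). The helper body below is a simplified
# stand-in for the real helper that would move to common/nvme.
_test_dev_is_nvme() {
	# Treat a device as NVMe when its canonical path mentions "nvme".
	readlink -f "$1" | grep -q nvme
}

device_requires() {
	if ! _test_dev_is_nvme "$TEST_DEV"; then
		SKIP_REASONS+=("$TEST_DEV is not an NVMe device")
		return 1
	fi
}
```

With that in place, the readlink/grep filtering inside test() becomes unnecessary.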
> +. common/xfs
> +
> +DESCRIPTION="test md atomic writes for NVMe drives"
> +QUICK=1
> +
> +requires() {
> + _nvme_requires
> +}
> +
> +test() {
> + local ns
> + local testdev_count=0
> + declare -A NVME_TEST_DEVS
> + declare -A NVME_TEST_DEVS_NAME
> + declare -A NVME_TEST_DEVS_SYSFS
> +
> + echo "Running md_atomics_test"
> +
> + for i in "${!TEST_DEV_SYSFS_DIRS[@]}"; do
> + TEST_DEV=${TEST_DEV_SYSFS_DIRS[$i]}
> + if readlink -f "$TEST_DEV" | grep -q nvme; then
If _require_test_dev_is_nvme() is called from device_requires(), the check
above will not be required.
> + NVME_TEST_DEVS["$testdev_count"]="$i";
> + NVME_TEST_DEVS_SYSFS["$testdev_count"]="$TEST_DEV";
> + NVME_TEST_DEVS_NAME["$testdev_count"]="$(awk '{print substr($1,6) }' <<< $i)"
> + let testdev_count=testdev_count+1;
> + fi
> + done
> +
> + for ((i = 0; i < ${#NVME_TEST_DEVS[@]}; ++i)); do
> + TEST_DEV_SYSFS="${NVME_TEST_DEVS_SYSFS[$i]}"
> + TEST_DEV="${NVME_TEST_DEVS[$i]}"
> + _require_device_support_atomic_writes
> + done
> +
> + if [[ $testdev_count -lt 4 ]]; then
> + SKIP_REASONS+=("requires at least 4 NVMe devices")
> + return 1
> + fi
> +
> + _md_atomics_test "${NVME_TEST_DEVS_NAME[0]}" "${NVME_TEST_DEVS_NAME[1]}" \
> + "${NVME_TEST_DEVS_NAME[2]}" "${NVME_TEST_DEVS_NAME[3]}"
> +
> + echo "Test complete"
> +}
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-17 16:22 ` John Garry
@ 2025-09-18 4:36 ` Shinichiro Kawasaki
2025-09-18 7:48 ` John Garry
0 siblings, 1 reply; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-18 4:36 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 17, 2025 / 17:22, John Garry wrote:
> On 17/09/2025 14:12, John Garry wrote:
> > > It also has slightly different variables for use in the
> > > test_device_array()
> > > function: TEST_DEV_ARRAY and TEST_DEV_ARRAY_SYSFS_DIRS. As an
> > > example, I made a
> > > quick commit on top of your patches [4].
> > >
> > > [4] https://github.com/kawasaki/blktests/commit/fae0b3b617a19dab60610f50361bb0da6e0543ea
> > > I will review details of your patches tomorrow.
> >
> > great, thanks.
> >
> > I'll test md/002 and md/003 today with all these changes.
>
> I gave it a quick spin and it looks to work ok.
Good to hear, thanks.
>
> About TEST_CASE_DEV_ARRAY, is it scalable to index this per test case?
I'm not exactly sure what you mean by the word "scalable", but I guess
you are worried about having many TEST_CASE_DEV_ARRAY[X]=Y config lines for
many test cases with test_device_array(). The series allows the keys X of
TEST_CASE_DEV_ARRAY to be regular expressions, so I think many of the config
lines can be combined into a single line when they use the same devices Y.
BTW, I made detailed comments on your patches. Other than those comments
and the adaptations to test_device_array(), the series looks good to me.
Thanks!
* Re: [PATCH blktests 1/7] common/rc: add _min()
2025-09-18 4:08 ` Shinichiro Kawasaki
@ 2025-09-18 7:33 ` John Garry
0 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-18 7:33 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 18/09/2025 05:08, Shinichiro Kawasaki wrote:
> On Sep 12, 2025 / 09:57, John Garry wrote:
>> Add a helper to find the minimum of two numbers.
>>
>> A similar helper is being added in xfstests:
>> https://lore.kernel.org/linux-xfs/cover.1755849134.git.ojaswin@linux.ibm.com/T/#m962683d8115979e57342d2644660230ee978c803
>>
>> Signed-off-by: John Garry <john.g.garry@oracle.com>
>> ---
>> common/rc | 11 +++++++++++
>> 1 file changed, 11 insertions(+)
>>
>> diff --git a/common/rc b/common/rc
>> index 946dee1..77a0f45 100644
>> --- a/common/rc
>> +++ b/common/rc
>> @@ -700,3 +700,14 @@ _real_dev()
>> fi
>> echo "$dev"
>> }
>> +
>> +_min() {
>> + local ret
>> +
>> + for arg in "$@"; do
>> + if [ -z "$ret" ] || (( $arg < $ret )); then
>
> The line above and,
>
>> + ret="$arg"
>> + fi
>> + done
>> + echo $ret
>
> this line above caused shellcheck warnings below.
>
> $ make check
> shellcheck -x -e SC2119 -f gcc check common/* \
> tests/*/rc tests/*/[0-9]*[0-9] src/*.sh
> common/rc:708:26: note: $/${} is unnecessary on arithmetic variables. [SC2004]
> common/rc:708:33: note: $/${} is unnecessary on arithmetic variables. [SC2004]
> common/rc:712:7: note: Double quote to prevent globbing and word splitting. [SC2086]
> tests/md/rc:28:12: note: $/${} is unnecessary on arithmetic variables. [SC2004]
> tests/md/rc:28:21: note: $/${} is unnecessary on arithmetic variables. [SC2004]
> ...
>
> Other patches caused other shellcheck warnings. Could you run "make check" and
> address them?
sure, will fix
thanks
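For reference, a shellcheck-clean form of _min() might look like the sketch
below (SC2004 wants no $ inside arithmetic, SC2086 wants the final expansion
quoted); the version actually merged may differ:

```shell
# _min() adjusted for the shellcheck findings: no $ prefixes inside (( ))
# (SC2004) and the echoed expansion quoted (SC2086). Logic is unchanged:
# an unset ret accepts the first argument, then smaller values replace it.
_min() {
	local ret arg

	for arg in "$@"; do
		if [ -z "$ret" ] || (( arg < ret )); then
			ret="$arg"
		fi
	done
	echo "$ret"
}
```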
* Re: [PATCH blktests 2/7] md/rc: add _md_atomics_test
2025-09-18 4:17 ` Shinichiro Kawasaki
@ 2025-09-18 7:36 ` John Garry
0 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-18 7:36 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 18/09/2025 05:17, Shinichiro Kawasaki wrote:
> On Sep 12, 2025 / 09:57, John Garry wrote:
>> The stacked device atomic writes testing is currently limited.
>>
>> md/002 currently only tests scsi_debug. SCSI does not support atomic
>> boundaries, so it would be nice to test NVMe (which does support them).
>>
>> Furthermore, the testing in md/002 for chunk boundaries is very limited,
>> in that we test only one boundary value. Indeed, for RAID0 and RAID10, a
>> boundary should always be set for testing.
>>
>> Finally, md/002 only tests md RAID0/1/10. In future we will also want to
>> test the following stacked device personalities which support atomic
>> writes:
>> - md-linear (being upstreamed)
>> - dm-linear
>> - dm-stripe
>> - dm-mirror
>>
>> To solve all those problems, add a generic test handler,
>> _md_atomics_test(). This can be extended for more extensive testing.
>>
>> This test handler will accept a group of devices and test as follows:
>> a. calculate expected atomic write limits based on device limits
>> b. Take results from a., and refine expected limits based on any chunk
>> size
>> c. loop through creating a stacked device for each chunk size. We loop
>> once for any personality which does not have a chunk size, e.g. RAID1
>> d. test sysfs and statx limits vs what is calculated in a. and b.
>> e. test RWF_ATOMIC is accepted or rejected as expected
>>
>> Steps c, d, and e are really same as md/002.
>>
>> Signed-off-by: John Garry <john.g.garry@oracle.com>
>> ---
>> tests/md/rc | 372 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 372 insertions(+)
>>
>> diff --git a/tests/md/rc b/tests/md/rc
>> index 96bcd97..105d283 100644
>> --- a/tests/md/rc
>> +++ b/tests/md/rc
>> @@ -5,9 +5,381 @@
>> # Tests for md raid
>>
>> . common/rc
>> +. common/xfs
>>
>> group_requires() {
>> + _have_kver 6 14 0
>> _have_root
>> _have_program mdadm
>> + _have_xfs_io_atomic_write
>
> I don't think either "_have_kver 6 14 0" or "_have_xfs_io_atomic_write" is
> required for md/001. I suggest to introduce a new helper,
>
> _stacked_atomic_test_requires() {
> _have_kver 6 14 0
> _have_xfs_io_atomic_write
> }
>
> and call it from requires() of md/002 and md/003.
ok, fine
>
>> + _have_driver raid0
>> + _have_driver raid1
>> + _have_driver raid10
>> _have_driver md-mod
>> }
>> +
>> +declare -A MD_DEVICES
>> +
>> +_max_pow_of_two_factor() {
>> + part1=$1
>> + part2=-$1
>> + retval=$(($part1 & $part2))
>
> Nit: "local" declarations are missing for part1, part2 and retval.
> Same comment for some other local variables introduced by this patch.
>
ok, will fix
>> + echo "$retval"
>> +}
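(As an aside for readers: `$((part1 & part2))` is the classic lowest-set-bit
trick. For a positive x, x & -x equals the largest power of two dividing x.
A standalone illustration, not part of the patch:)

```shell
# For positive x, two's complement negation means x & -x isolates the
# lowest set bit of x, i.e. the largest power-of-two factor of x.
_max_pow_of_two_factor() {
	local x=$1
	echo "$(( x & -x ))"
}
```

For example, 12 (0b1100) yields 4, and 96 (0b1100000) yields 32.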
>> +
>> +# Find max atomic size given a boundary and chunk size
>> +# @unit is set if we want atomic write "unit" size, i.e power-of-2
>> +# @chunk must be > 0
>> +_md_atomics_boundaries_max() {
>> + boundary=$1
>> + chunk=$2
>> + unit=$3
>> +
>> + if [ "$boundary" -eq 0 ]
>> + then
>> + if [ "$unit" -eq 1 ]
>> + then
>> + retval=$(_max_pow_of_two_factor $chunk)
>> + echo "$retval"
>> + return 1
>> + fi
>> +
>> + echo "$chunk"
>> + return 1
>
> When a bash function returns a non-zero value, it implies the function failed.
> When this function returns 1 at the line above, does it indicate failure?
> It looks like it echoes back a good number, so I guess a plain "return" is more
> appropriate. Same comment for the other "return 1" lines in this function.
this function should not fail, so I will just use "return"
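The point here is that callers read the value from stdout via command
substitution, while the return status is a separate success/failure channel.
A minimal illustration of the convention:

```shell
# The echoed value and the exit status travel separately: $(...) captures
# stdout, while $? carries the status. A value-returning helper that
# succeeded should end with a plain "return" (status 0), not "return 1".
_compute() {
	echo "42"
	return
}

val=$(_compute)
status=$?
echo "val=$val status=$status"   # prints: val=42 status=0
```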
>
>> + fi
>> +
>> + # boundary is always a power-of-2
>> + if [ "$boundary" -eq "$chunk" ]
>> + then
>> + echo "$boundary"
>> + return 1
>> + fi
>> +
>> + if [ "$boundary" -gt "$chunk" ]
>> + then
>> + if (( $boundary % $chunk == 0))
>> + then
>> + if [ "$unit" -eq 1 ]
>> + then
>> + retval=$(_max_pow_of_two_factor $chunk)
>> + echo "$retval"
>> + return 1
>> + fi
>> + echo "$chunk"
>> + return 1
>> + fi
>> + echo "0"
>> + return 1
>> + fi
>> +
>> + if (( $chunk % $boundary == 0))
>> + then
>> + echo "$boundary"
>> + return 1
>> + fi
>> +
>> + echo "0"
>> +}
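Folding the review points together (plain "return", local variables,
shellcheck-friendly arithmetic), the function could end up something like the
sketch below; the actual v2 may of course differ:

```shell
# _md_atomics_boundaries_max() reworked per the review: local variables,
# plain "return", and arithmetic without $ prefixes. The power-of-two
# helper is inlined as (( chunk & -chunk )). Behavior is unchanged.
_md_atomics_boundaries_max() {
	local boundary=$1 chunk=$2 unit=$3

	if (( boundary == 0 )); then
		if (( unit == 1 )); then
			echo "$(( chunk & -chunk ))"
			return
		fi
		echo "$chunk"
		return
	fi

	# boundary is always a power-of-2
	if (( boundary == chunk )); then
		echo "$boundary"
		return
	fi

	if (( boundary > chunk )); then
		if (( boundary % chunk == 0 )); then
			if (( unit == 1 )); then
				echo "$(( chunk & -chunk ))"
				return
			fi
			echo "$chunk"
			return
		fi
		echo "0"
		return
	fi

	if (( chunk % boundary == 0 )); then
		echo "$boundary"
		return
	fi
	echo "0"
}
```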
Thanks,
John
* Re: [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices
2025-09-18 4:27 ` Shinichiro Kawasaki
@ 2025-09-18 7:44 ` John Garry
0 siblings, 0 replies; 25+ messages in thread
From: John Garry @ 2025-09-18 7:44 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 18/09/2025 05:27, Shinichiro Kawasaki wrote:
> On Sep 12, 2025 / 09:57, John Garry wrote:
>> md/002 only tests SCSI via scsi_debug.
>>
>> It is also useful to test NVMe, so add a specific test for that.
>>
>> The results for 002 and 003 should be the same, so link them.
>>
>> _md_atomics_test requires 4x devices with atomics support, so check for
>> that.
>>
>> Signed-off-by: John Garry <john.g.garry@oracle.com>
> [...]
>> diff --git a/tests/md/003 b/tests/md/003
>> new file mode 100755
>> index 0000000..8128f8d
>> --- /dev/null
>> +++ b/tests/md/003
>> @@ -0,0 +1,51 @@
>> +#!/bin/bash
>> +# SPDX-License-Identifier: GPL-3.0+
>> +# Copyright (C) 2025 Oracle and/or its affiliates
>> +#
>> +# Test NVMe Atomic Writes with MD devices
>> +
>> +. tests/nvme/rc
>
> It is not recommended to introduce dependencies across tests/* groups. If you
> need some nvme-related helper functions, they should be placed not in
> tests/nvme/rc but in common/nvme.
>
> IIUC, tests/nvme/rc is required to call _nvme_requires() in requires(), but I
> think _nvme_requires() is too much for this test case.
I thought that we would need _nvme_requires() to ensure that we have
the appropriate driver, i.e. the nvme core driver.
> I guess it is enough to
> call _require_test_dev_is_nvme() from device_requires() in md/003.
> To do that, I
> suggest adding another preparation patch which moves _require_test_dev_is_nvme()
> from tests/nvme/rc to common/nvme. (This comment assumes the test_device_array()
> support series.)
ok
>
>> +. common/xfs
>> +
>> +DESCRIPTION="test md atomic writes for NVMe drives"
>> +QUICK=1
>> +
>> +requires() {
>> + _nvme_requires
>> +}
>> +
>> +test() {
>> + local ns
>> + local testdev_count=0
>> + declare -A NVME_TEST_DEVS
>> + declare -A NVME_TEST_DEVS_NAME
>> + declare -A NVME_TEST_DEVS_SYSFS
>> +
>> + echo "Running md_atomics_test"
>> +
>> + for i in "${!TEST_DEV_SYSFS_DIRS[@]}"; do
>> + TEST_DEV=${TEST_DEV_SYSFS_DIRS[$i]}
>> + if readlink -f "$TEST_DEV" | grep -q nvme; then
>
> If _require_test_dev_is_nvme() is called from device_requires(), the check
> above will not be required.
ok, sure
thanks,
John
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-18 4:36 ` Shinichiro Kawasaki
@ 2025-09-18 7:48 ` John Garry
2025-09-18 10:37 ` Shinichiro Kawasaki
0 siblings, 1 reply; 25+ messages in thread
From: John Garry @ 2025-09-18 7:48 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: linux-block@vger.kernel.org
On 18/09/2025 05:36, Shinichiro Kawasaki wrote:
> On Sep 17, 2025 / 17:22, John Garry wrote:
>> On 17/09/2025 14:12, John Garry wrote:
>>>> It also has slightly different variables for use in the
>>>> test_device_array()
>>>> function: TEST_DEV_ARRAY and TEST_DEV_ARRAY_SYSFS_DIRS. As an
>>>> example, I made a
>>>> quick commit on top of your patches [4].
>>>>
>>>> [4] https://github.com/kawasaki/blktests/commit/fae0b3b617a19dab60610f50361bb0da6e0543ea
>>>> I will review details of your patches tomorrow.
>>>
>>> great, thanks.
>>>
>>> I'll test md/002 and md/003 today with all these changes.
>>
>> I gave it a quick spin and it looks to work ok.
>
> Good to hear, thanks.
>
>>
>> About TEST_CASE_DEV_ARRAY, is it scalable to index this per test case?
>
> I'm not exactly sure what you mean by the word "scalable", but I guess
> you are worried about having many TEST_CASE_DEV_ARRAY[X]=Y config lines for
> many test cases with test_device_array().
Yes
> The series allows the keys X of
> TEST_CASE_DEV_ARRAY to be regular expressions, so I think many of the config
> lines can be combined into a single line when they use the same devices Y.
How would that look in the config file, then? More examples would help me
see your idea.
>
> BTW, I made the detailed comments on your patches. Other than the comments
> and the adaptations to the test_device_array(), the series looks good to me.
> Thanks!
thanks for the help
How should we coordinate posting and merging of these series?
Shall I repost mine based on yours?
Thanks,
John
* Re: [PATCH blktests 0/7] Further stacked device atomic writes testing
2025-09-18 7:48 ` John Garry
@ 2025-09-18 10:37 ` Shinichiro Kawasaki
0 siblings, 0 replies; 25+ messages in thread
From: Shinichiro Kawasaki @ 2025-09-18 10:37 UTC (permalink / raw)
To: John Garry; +Cc: linux-block@vger.kernel.org
On Sep 18, 2025 / 08:48, John Garry wrote:
> On 18/09/2025 05:36, Shinichiro Kawasaki wrote:
> > On Sep 17, 2025 / 17:22, John Garry wrote:
[...]
> > > About TEST_CASE_DEV_ARRAY, is it scalable to index this per test case?
> >
> > I'm not exactly sure what you mean with the word "scalable", but I guess
> > you worry about many config lines of TEST_CASE_DEV_ARRAY[X]=Y for many test
> > cases with test_device_array().
>
> Yes
>
> > The series allows the keys X of
> > TEST_CASE_DEV_ARRAY can be regular expressions, I think many of the config
> > lines can be combined into single line when they use the same devices Y.
>
> How would that look then in the config file? More examples could help me see
> your idea.
One of the patches in my series adds test cases meta/020 to meta/024 to
confirm the test_device_array() feature. A single line in the config file
can specify four NVMe devices common to these five test cases:
If these four devices can also be shared with md/003, the line below can
specify the devices for all of md/003 and meta/020-024:
TEST_CASE_DEV_ARRAY[(md/003|meta/02[0-4])]="/dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1"
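For what it's worth, the lookup implied by such regex keys can be sketched in a
few lines of bash. This is only an illustration of the idea; the real matching
logic is whatever the test_device_array() series implements:

```shell
# Sketch of regex-key lookup for TEST_CASE_DEV_ARRAY: the first key whose
# anchored regex matches the test name wins. Illustration only.
declare -A TEST_CASE_DEV_ARRAY
key='(md/003|meta/02[0-4])'
TEST_CASE_DEV_ARRAY[$key]="/dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1"

_lookup_dev_array() {
	local test_name=$1 k
	for k in "${!TEST_CASE_DEV_ARRAY[@]}"; do
		# Anchor the key so e.g. md/003 does not match md/0031
		if [[ $test_name =~ ^${k}$ ]]; then
			echo "${TEST_CASE_DEV_ARRAY[$k]}"
			return
		fi
	done
	return 1
}
```

Here both "_lookup_dev_array md/003" and "_lookup_dev_array meta/021" would
resolve to the same four devices, while an unmatched test name fails the lookup.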
>
> >
> > BTW, I made the detailed comments on your patches. Other than the comments
> > and the adaptations to the test_device_array(), the series looks good to me.
> > Thanks!
>
> thanks for the help
>
> How should we coordinate posting and merging of these series?
>
> Shall I repost mine based on yours?
Yes, I think that will work. I suggest noting in the cover letter that your
series should be applied on top of the test_device_array() support series.
end of thread, other threads:[~2025-09-18 10:37 UTC | newest]
Thread overview: 25+ messages
2025-09-12 9:57 [PATCH blktests 0/7] Further stacked device atomic writes testing John Garry
2025-09-12 9:57 ` [PATCH blktests 1/7] common/rc: add _min() John Garry
2025-09-18 4:08 ` Shinichiro Kawasaki
2025-09-18 7:33 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 2/7] md/rc: add _md_atomics_test John Garry
2025-09-18 4:17 ` Shinichiro Kawasaki
2025-09-18 7:36 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 3/7] md/002: convert to use _md_atomics_test John Garry
2025-09-12 9:57 ` [PATCH blktests 4/7] md/003: add NVMe atomic write tests for stacked devices John Garry
2025-09-18 4:27 ` Shinichiro Kawasaki
2025-09-18 7:44 ` John Garry
2025-09-12 9:57 ` [PATCH blktests 5/7] md/rc: test atomic writes for dm-linear John Garry
2025-09-12 9:57 ` [PATCH blktests 6/7] md/rc: test atomic writes for dm-stripe John Garry
2025-09-12 9:57 ` [PATCH blktests 7/7] md/rc: test atomic writes for dm-mirror John Garry
2025-09-16 8:55 ` [PATCH blktests 0/7] Further stacked device atomic writes testing Shinichiro Kawasaki
2025-09-16 10:20 ` John Garry
2025-09-16 11:55 ` John Garry
2025-09-16 12:23 ` Shinichiro Kawasaki
2025-09-16 12:27 ` John Garry
2025-09-17 12:02 ` Shinichiro Kawasaki
2025-09-17 13:12 ` John Garry
2025-09-17 16:22 ` John Garry
2025-09-18 4:36 ` Shinichiro Kawasaki
2025-09-18 7:48 ` John Garry
2025-09-18 10:37 ` Shinichiro Kawasaki