* [PATCH 0/4] btrfs: tests: zoned: add selftest for zoned code
@ 2026-01-23 12:59 Naohiro Aota
2026-01-23 12:59 ` [PATCH 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-23 12:59 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Having conventional zones in a RAID profile made the alloc_offset loading
code complex enough that it is a good time to add a btrfs selftest for the
zoned code.
For now, it tests btrfs_load_block_group_by_raid_type() with various test
cases. The load_zone_info_tests[] array defines the test cases.
Naohiro Aota (4):
btrfs: tests: add cleanup functions for test specific functions
btrfs: add cleanup function for btrfs_free_chunk_map
btrfs: zoned: factor out the zone loading part into a testable
function
btrfs: tests: zoned: add selftest for zoned code
fs/btrfs/Makefile | 2 +-
fs/btrfs/tests/btrfs-tests.c | 3 +
fs/btrfs/tests/btrfs-tests.h | 7 +
fs/btrfs/tests/zoned-tests.c | 676 +++++++++++++++++++++++++++++++++++
fs/btrfs/volumes.h | 1 +
fs/btrfs/zoned.c | 112 +++---
fs/btrfs/zoned.h | 9 +
7 files changed, 761 insertions(+), 49 deletions(-)
create mode 100644 fs/btrfs/tests/zoned-tests.c
--
2.52.0
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH 1/4] btrfs: tests: add cleanup functions for test specific functions
2026-01-23 12:59 [PATCH 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
@ 2026-01-23 12:59 ` Naohiro Aota
2026-01-23 12:59 ` [PATCH 2/4] btrfs: add cleanup function for btrfs_free_chunk_map Naohiro Aota
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-23 12:59 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Add auto-cleanup helper functions for btrfs_free_dummy_fs_info and
btrfs_free_dummy_block_group.
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/tests/btrfs-tests.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/btrfs/tests/btrfs-tests.h b/fs/btrfs/tests/btrfs-tests.h
index 4307bdaa6749..b61dbf93e9ed 100644
--- a/fs/btrfs/tests/btrfs-tests.h
+++ b/fs/btrfs/tests/btrfs-tests.h
@@ -9,6 +9,8 @@
#include <linux/types.h>
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+#include <linux/cleanup.h>
+
int btrfs_run_sanity_tests(void);
#define test_msg(fmt, ...) pr_info("BTRFS: selftest: " fmt "\n", ##__VA_ARGS__)
@@ -48,10 +50,14 @@ int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize);
struct inode *btrfs_new_test_inode(void);
struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize);
void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info);
+DEFINE_FREE(btrfs_free_dummy_fs_info, struct btrfs_fs_info *,
+ btrfs_free_dummy_fs_info(_T))
void btrfs_free_dummy_root(struct btrfs_root *root);
struct btrfs_block_group *
btrfs_alloc_dummy_block_group(struct btrfs_fs_info *fs_info, unsigned long length);
void btrfs_free_dummy_block_group(struct btrfs_block_group *cache);
+DEFINE_FREE(btrfs_free_dummy_block_group, struct btrfs_block_group *,
+ btrfs_free_dummy_block_group(_T));
void btrfs_init_dummy_trans(struct btrfs_trans_handle *trans,
struct btrfs_fs_info *fs_info);
void btrfs_init_dummy_transaction(struct btrfs_transaction *trans, struct btrfs_fs_info *fs_info);
--
2.52.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH 2/4] btrfs: add cleanup function for btrfs_free_chunk_map
2026-01-23 12:59 [PATCH 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-23 12:59 ` [PATCH 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
@ 2026-01-23 12:59 ` Naohiro Aota
2026-01-23 12:59 ` [PATCH 3/4] btrfs: zoned: factor out the zone loading part into a testable function Naohiro Aota
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
3 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-23 12:59 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/volumes.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index e4644352314a..8b88a21b16aa 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -633,6 +633,7 @@ static inline void btrfs_free_chunk_map(struct btrfs_chunk_map *map)
kfree(map);
}
}
+DEFINE_FREE(btrfs_free_chunk_map, struct btrfs_chunk_map *, btrfs_free_chunk_map(_T))
struct btrfs_balance_control {
struct btrfs_balance_args data;
--
2.52.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH 3/4] btrfs: zoned: factor out the zone loading part into a testable function
2026-01-23 12:59 [PATCH 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-23 12:59 ` [PATCH 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
2026-01-23 12:59 ` [PATCH 2/4] btrfs: add cleanup function for btrfs_free_chunk_map Naohiro Aota
@ 2026-01-23 12:59 ` Naohiro Aota
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
3 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-23 12:59 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Separate the btrfs_load_block_group_*() calling path into its own function,
so that it can serve as an entry point for a unit test.
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/zoned.c | 109 ++++++++++++++++++++++++++---------------------
fs/btrfs/zoned.h | 9 ++++
2 files changed, 70 insertions(+), 48 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 576e8d3ef69c..052d6988ab8c 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1812,6 +1812,65 @@ static int btrfs_load_block_group_raid10(struct btrfs_block_group *bg,
return 0;
}
+EXPORT_FOR_TESTS
+int btrfs_load_block_group_by_raid_type(struct btrfs_block_group *bg,
+ struct btrfs_chunk_map *map,
+ struct zone_info *zone_info,
+ unsigned long *active, u64 last_alloc)
+{
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+ u64 profile;
+ int ret;
+
+ profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
+ switch (profile) {
+ case 0: /* single */
+ ret = btrfs_load_block_group_single(bg, &zone_info[0], active);
+ break;
+ case BTRFS_BLOCK_GROUP_DUP:
+ ret = btrfs_load_block_group_dup(bg, map, zone_info, active, last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID1:
+ case BTRFS_BLOCK_GROUP_RAID1C3:
+ case BTRFS_BLOCK_GROUP_RAID1C4:
+ ret = btrfs_load_block_group_raid1(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID0:
+ ret = btrfs_load_block_group_raid0(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID10:
+ ret = btrfs_load_block_group_raid10(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID5:
+ case BTRFS_BLOCK_GROUP_RAID6:
+ default:
+ btrfs_err(fs_info, "zoned: profile %s not yet supported",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EINVAL;
+ }
+
+ if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 &&
+ profile != BTRFS_BLOCK_GROUP_RAID10) {
+ /*
+ * Detected broken write pointer. Make this block group
+ * unallocatable by setting the allocation pointer at the end of
+ * allocatable region. Relocating this block group will fix the
+ * mismatch.
+ *
+ * Currently, we cannot handle RAID0 or RAID10 case like this
+ * because we don't have a proper zone_capacity value. But,
+ * reading from this block group won't work anyway by a missing
+ * stripe.
+ */
+ bg->alloc_offset = bg->zone_capacity;
+ }
+
+ return ret;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1824,7 +1883,6 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
unsigned long *active = NULL;
u64 last_alloc = 0;
u32 num_sequential = 0, num_conventional = 0;
- u64 profile;
if (!btrfs_is_zoned(fs_info))
return 0;
@@ -1884,53 +1942,8 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
}
}
- profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
- switch (profile) {
- case 0: /* single */
- ret = btrfs_load_block_group_single(cache, &zone_info[0], active);
- break;
- case BTRFS_BLOCK_GROUP_DUP:
- ret = btrfs_load_block_group_dup(cache, map, zone_info, active,
- last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID1:
- case BTRFS_BLOCK_GROUP_RAID1C3:
- case BTRFS_BLOCK_GROUP_RAID1C4:
- ret = btrfs_load_block_group_raid1(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID0:
- ret = btrfs_load_block_group_raid0(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID10:
- ret = btrfs_load_block_group_raid10(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID5:
- case BTRFS_BLOCK_GROUP_RAID6:
- default:
- btrfs_err(fs_info, "zoned: profile %s not yet supported",
- btrfs_bg_type_to_raid_name(map->type));
- ret = -EINVAL;
- goto out;
- }
-
- if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 &&
- profile != BTRFS_BLOCK_GROUP_RAID10) {
- /*
- * Detected broken write pointer. Make this block group
- * unallocatable by setting the allocation pointer at the end of
- * allocatable region. Relocating this block group will fix the
- * mismatch.
- *
- * Currently, we cannot handle RAID0 or RAID10 case like this
- * because we don't have a proper zone_capacity value. But,
- * reading from this block group won't work anyway by a missing
- * stripe.
- */
- cache->alloc_offset = cache->zone_capacity;
- }
+ ret = btrfs_load_block_group_by_raid_type(cache, map, zone_info, active,
+ last_alloc);
out:
/* Reject non SINGLE data profiles without RST */
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 2fdc88c6fa3c..8e21a836f858 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -99,6 +99,15 @@ void btrfs_check_active_zone_reservation(struct btrfs_fs_info *fs_info);
int btrfs_reset_unused_block_groups(struct btrfs_space_info *space_info, u64 num_bytes);
void btrfs_show_zoned_stats(struct btrfs_fs_info *fs_info, struct seq_file *seq);
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+struct zone_info;
+
+int btrfs_load_block_group_by_raid_type(struct btrfs_block_group *bg,
+ struct btrfs_chunk_map *map,
+ struct zone_info *zone_info,
+ unsigned long *active, u64 last_alloc);
+#endif
+
#else /* CONFIG_BLK_DEV_ZONED */
static inline int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info)
--
2.52.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-23 12:59 [PATCH 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
` (2 preceding siblings ...)
2026-01-23 12:59 ` [PATCH 3/4] btrfs: zoned: factor out the zone loading part into a testable function Naohiro Aota
@ 2026-01-23 12:59 ` Naohiro Aota
2026-01-23 22:17 ` kernel test robot
` (3 more replies)
3 siblings, 4 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-23 12:59 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Add a test function for the zoned code. For now, it tests
btrfs_load_block_group_by_raid_type() with various test cases. The
load_zone_info_tests[] array defines the test cases.
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/Makefile | 2 +-
fs/btrfs/tests/btrfs-tests.c | 3 +
fs/btrfs/tests/btrfs-tests.h | 1 +
fs/btrfs/tests/zoned-tests.c | 676 +++++++++++++++++++++++++++++++++++
fs/btrfs/zoned.c | 3 +
5 files changed, 684 insertions(+), 1 deletion(-)
create mode 100644 fs/btrfs/tests/zoned-tests.c
diff --git a/fs/btrfs/Makefile b/fs/btrfs/Makefile
index 743d7677b175..b3a12f558c2f 100644
--- a/fs/btrfs/Makefile
+++ b/fs/btrfs/Makefile
@@ -44,4 +44,4 @@ btrfs-$(CONFIG_BTRFS_FS_RUN_SANITY_TESTS) += tests/free-space-tests.o \
tests/extent-buffer-tests.o tests/btrfs-tests.o \
tests/extent-io-tests.o tests/inode-tests.o tests/qgroup-tests.o \
tests/free-space-tree-tests.o tests/extent-map-tests.o \
- tests/raid-stripe-tree-tests.o tests/delayed-refs-tests.o
+ tests/raid-stripe-tree-tests.o tests/delayed-refs-tests.o tests/zoned-tests.o
diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
index b576897d71cc..2933b487bd25 100644
--- a/fs/btrfs/tests/btrfs-tests.c
+++ b/fs/btrfs/tests/btrfs-tests.c
@@ -304,6 +304,9 @@ int btrfs_run_sanity_tests(void)
}
}
ret = btrfs_test_extent_map();
+ if (ret)
+ goto out;
+ ret = btrfs_test_zoned();
out:
btrfs_destroy_test_fs();
diff --git a/fs/btrfs/tests/btrfs-tests.h b/fs/btrfs/tests/btrfs-tests.h
index b61dbf93e9ed..479753777f26 100644
--- a/fs/btrfs/tests/btrfs-tests.h
+++ b/fs/btrfs/tests/btrfs-tests.h
@@ -47,6 +47,7 @@ int btrfs_test_free_space_tree(u32 sectorsize, u32 nodesize);
int btrfs_test_raid_stripe_tree(u32 sectorsize, u32 nodesize);
int btrfs_test_extent_map(void);
int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize);
+int btrfs_test_zoned(void);
struct inode *btrfs_new_test_inode(void);
struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize);
void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info);
diff --git a/fs/btrfs/tests/zoned-tests.c b/fs/btrfs/tests/zoned-tests.c
new file mode 100644
index 000000000000..b3454c7122bf
--- /dev/null
+++ b/fs/btrfs/tests/zoned-tests.c
@@ -0,0 +1,676 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2015 Facebook. All rights reserved.
+ */
+
+#include <linux/cleanup.h>
+#include <linux/sizes.h>
+
+#include "btrfs-tests.h"
+#include "../space-info.h"
+#include "../volumes.h"
+#include "../zoned.h"
+
+#define WP_MISSING_DEV ((u64)-1)
+#define WP_CONVENTIONAL ((u64)-2)
+#define ZONE_SIZE SZ_256M
+
+#define HALF_STRIPE_LEN (BTRFS_STRIPE_LEN >> 1)
+
+struct load_zone_info_test_vector {
+ u64 raid_type;
+ u64 num_stripes;
+ u64 alloc_offsets[8];
+ u64 last_alloc;
+ u64 bg_length;
+ bool degraded;
+
+ int expected_result;
+ u64 expected_alloc_offset;
+
+ const char *description;
+};
+
+struct zone_info {
+ u64 physical;
+ u64 capacity;
+ u64 alloc_offset;
+};
+
+static int test_load_zone_info(struct btrfs_fs_info *fs_info,
+ struct load_zone_info_test_vector *test)
+{
+ struct btrfs_block_group *bg __free(btrfs_free_dummy_block_group) = NULL;
+ struct btrfs_chunk_map *map __free(btrfs_free_chunk_map) = NULL;
+ struct zone_info AUTO_KFREE(zone_info);
+ unsigned long AUTO_KFREE(active);
+ int i, ret;
+
+ bg = btrfs_alloc_dummy_block_group(fs_info, test->bg_length);
+ if (!bg) {
+ test_std_err(TEST_ALLOC_BLOCK_GROUP);
+ return -ENOMEM;
+ }
+
+ map = btrfs_alloc_chunk_map(test->num_stripes, GFP_KERNEL);
+ if (!map) {
+ test_std_err(TEST_ALLOC_EXTENT_MAP);
+ return -ENOMEM;
+ }
+
+ zone_info = kcalloc(test->num_stripes, sizeof(*zone_info), GFP_KERNEL);
+ if (!zone_info) {
+ test_err("cannot allocate zone info");
+ return -ENOMEM;
+ }
+
+ active = bitmap_zalloc(test->num_stripes, GFP_KERNEL);
+ if (!active) {
+ test_err("cannot allocate active bitmap");
+ return -ENOMEM;
+ }
+
+ map->type = test->raid_type;
+ map->num_stripes = test->num_stripes;
+ if (test->raid_type == BTRFS_BLOCK_GROUP_RAID10)
+ map->sub_stripes = 2;
+ for (i = 0; i < test->num_stripes; i++) {
+ zone_info[i].physical = 0;
+ zone_info[i].alloc_offset = test->alloc_offsets[i];
+ zone_info[i].capacity = ZONE_SIZE;
+ if (zone_info[i].alloc_offset && zone_info[i].alloc_offset < ZONE_SIZE)
+ __set_bit(i, active);
+ }
+ if (test->degraded)
+ btrfs_set_opt(fs_info->mount_opt, DEGRADED);
+ else
+ btrfs_clear_opt(fs_info->mount_opt, DEGRADED);
+
+ ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
+ test->last_alloc);
+
+ if (ret != test->expected_result) {
+ test_err("unexpected return value: ret %d expected %d", ret,
+ test->expected_result);
+ return -EINVAL;
+ }
+
+ if (!ret && bg->alloc_offset != test->expected_alloc_offset) {
+ test_err("unexpected alloc_offset: alloc_offset %llu expected %llu",
+ bg->alloc_offset, test->expected_alloc_offset);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+struct load_zone_info_test_vector load_zone_info_tests[] = {
+ /* SINGLE */
+ {
+ .description = "SINGLE: load write pointer from sequential zone",
+ .raid_type = 0,
+ .num_stripes = 1,
+ .alloc_offsets = {
+ SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * SINGLE block group on a conventional zone sets last_alloc outside of
+ * btrfs_load_block_group_*(). Do not test that case.
+ */
+
+ /* DUP */
+ /* Normal case */
+ {
+ .description = "DUP: having matching write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "DUP: seq zone and conv zone, matching last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_1M,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential and one conventional zone, but having smaller
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "DUP: seq zone and conv zone, smaller last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = 0,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "DUP: fail: different write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_2M,
+ },
+ .expected_result = -EIO,
+ },
+ /* Error case: partial missing device should not happen on DUP. */
+ {
+ .description = "DUP: fail: missing device",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "DUP: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M,
+ .expected_result = -EIO,
+ },
+
+ /* RAID1 */
+ /* Normal case */
+ {
+ .description = "RAID1: having matching write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID1: seq zone and conv zone, matching last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_1M,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential and one conventional zone, but having smaller
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID1: seq zone and conv zone, smaller last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = 0,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Partial missing device should be recovered on DEGRADED mount */
+ {
+ .description = "RAID1: missing device recovered on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID1: fail: different write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_2M,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * A partial missing device on a non-DEGRADED mount never happens
+ * here, as it is rejected beforehand.
+ */
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID1: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M,
+ .expected_result = -EIO,
+ },
+
+ /* RAID0 */
+ /* Normal case */
+ {
+ .description = "RAID0: initial partial write",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, 0, 0, 0,
+ },
+ .expected_alloc_offset = HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: while in second stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: one stripe advanced",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID0: fail: disordered stripes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID0: fail: far distance",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID0: fail: too many partial writes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: a partial missing device is not allowed for RAID0, even
+ * on a DEGRADED mount.
+ */
+ {
+ .description = "RAID0: fail: missing device on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_result = -EIO,
+ },
+
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID0: seq zone and conv zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M - SZ_4K,
+ .expected_alloc_offset = SZ_2M - SZ_4K,
+ },
+ {
+ .description = "RAID0: conv zone and seq zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ WP_CONVENTIONAL, SZ_1M,
+ },
+ .last_alloc = SZ_2M + SZ_4K,
+ .expected_alloc_offset = SZ_2M + SZ_4K,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID0: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
+ .expected_result = -EIO,
+ },
+
+ /* RAID0, 4 stripes with seq zones and conv zones. */
+ {
+ .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 6",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 6,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
+ },
+ {
+ .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 7.5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: stripes [3, ?, ?, ?] last_alloc = 1",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
+ },
+ {
+ .description = "RAID0: stripes [2, ?, 1, ?] last_alloc = 5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 5,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
+ },
+ {
+ .description = "RAID0: fail: stripes [2, ?, 1, ?] last_alloc = 7",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7,
+ .expected_result = -EIO,
+ },
+
+ /* RAID10 */
+ /* Normal case */
+ {
+ .description = "RAID10: initial partial write",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
+ },
+ .expected_alloc_offset = HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: while in second stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: one stripe advanced",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, SZ_1M + BTRFS_STRIPE_LEN,
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: one stripe advanced, with conventional zone",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID10: fail: disordered stripes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID10: fail: far distance",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID10: fail: too many partial writes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN,
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN,
+ 0, 0, 0, 0,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: a partial missing device in the RAID0 level of RAID10 is
+ * not allowed, even on a DEGRADED mount.
+ */
+ {
+ .description = "RAID10: fail: missing device on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_MISSING_DEV, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_result = -EIO,
+ },
+
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID10: seq zone and conv zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M - SZ_4K,
+ .expected_alloc_offset = SZ_2M - SZ_4K,
+ },
+ {
+ .description = "RAID10: conv zone and seq zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ SZ_1M, SZ_1M,
+ },
+ .last_alloc = SZ_2M + SZ_4K,
+ .expected_alloc_offset = SZ_2M + SZ_4K,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID10: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
+ .expected_result = -EIO,
+ },
+
+ /* RAID10, 4 stripes with seq zones and conv zones. */
+ {
+ .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 6",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 6,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
+ },
+ {
+ .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 7.5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: stripes [3, ?, ?, ?] last_alloc = 1",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
+ },
+ {
+ .description = "RAID10: stripes [2, ?, 1, ?] last_alloc = 5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 5,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
+ },
+ {
+ .description = "RAID10: fail: stripes [2, ?, 1, ?] last_alloc = 7",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7,
+ .expected_result = -EIO,
+ },
+};
+
+int btrfs_test_zoned(void)
+{
+ struct btrfs_fs_info *fs_info __free(btrfs_free_dummy_fs_info) = NULL;
+ int ret;
+
+ test_msg("running zoned tests. error messages are expected.");
+
+ fs_info = btrfs_alloc_dummy_fs_info(PAGE_SIZE, PAGE_SIZE);
+ if (!fs_info) {
+ test_std_err(TEST_ALLOC_FS_INFO);
+ return -ENOMEM;
+ }
+
+ for (int i = 0; i < ARRAY_SIZE(load_zone_info_tests); i++) {
+ ret = test_load_zone_info(fs_info, &load_zone_info_tests[i]);
+ if (ret) {
+ test_err("test case \"%s\" failed",
+ load_zone_info_tests[i].description);
+ return ret;
+ }
+ }
+
+ return 0;
+}
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 052d6988ab8c..75351234eb36 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2370,6 +2370,9 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
if (!btrfs_is_zoned(block_group->fs_info))
return true;
+ if (unlikely(btrfs_is_testing(fs_info)))
+ return true;
+
map = block_group->physical_map;
spin_lock(&fs_info->zone_active_bgs_lock);
--
2.52.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
@ 2026-01-23 22:17 ` kernel test robot
2026-01-23 23:00 ` kernel test robot
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2026-01-23 22:17 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs; +Cc: oe-kbuild-all, Naohiro Aota
Hi Naohiro,
kernel test robot noticed the following build errors:
[auto build test ERROR on kdave/for-next]
[also build test ERROR on next-20260122]
[cannot apply to linus/master v6.19-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Naohiro-Aota/btrfs-tests-add-cleanup-functions-for-test-specific-functions/20260123-210300
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/20260123125920.4129581-5-naohiro.aota%40wdc.com
patch subject: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
config: powerpc64-randconfig-002-20260124 (https://download.01.org/0day-ci/archive/20260124/202601240657.rNUgphBi-lkp@intel.com/config)
compiler: powerpc64-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260124/202601240657.rNUgphBi-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601240657.rNUgphBi-lkp@intel.com/
All errors (new ones prefixed by >>):
fs/btrfs/tests/zoned-tests.c: In function 'test_load_zone_info':
>> fs/btrfs/tests/zoned-tests.c:89:8: error: implicit declaration of function 'btrfs_load_block_group_by_raid_type'; did you mean 'btrfs_load_block_group_zone_info'? [-Werror=implicit-function-declaration]
ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
btrfs_load_block_group_zone_info
cc1: some warnings being treated as errors
vim +89 fs/btrfs/tests/zoned-tests.c
39
40 static int test_load_zone_info(struct btrfs_fs_info *fs_info,
41 struct load_zone_info_test_vector *test)
42 {
43 struct btrfs_block_group *bg __free(btrfs_free_dummy_block_group) = NULL;
44 struct btrfs_chunk_map *map __free(btrfs_free_chunk_map) = NULL;
45 struct zone_info AUTO_KFREE(zone_info);
46 unsigned long AUTO_KFREE(active);
47 int i, ret;
48
49 bg = btrfs_alloc_dummy_block_group(fs_info, test->bg_length);
50 if (!bg) {
51 test_std_err(TEST_ALLOC_BLOCK_GROUP);
52 return -ENOMEM;
53 }
54
55 map = btrfs_alloc_chunk_map(test->num_stripes, GFP_KERNEL);
56 if (!map) {
57 test_std_err(TEST_ALLOC_EXTENT_MAP);
58 return -ENOMEM;
59 }
60
61 zone_info = kcalloc(test->num_stripes, sizeof(*zone_info), GFP_KERNEL);
62 if (!zone_info) {
63 test_err("cannot allocate zone info");
64 return -ENOMEM;
65 }
66
67 active = bitmap_zalloc(test->num_stripes, GFP_KERNEL);
68 if (!active) {
69 test_err("cannot allocate active bitmap");
70 return -ENOMEM;
71 }
72
73 map->type = test->raid_type;
74 map->num_stripes = test->num_stripes;
75 if (test->raid_type == BTRFS_BLOCK_GROUP_RAID10)
76 map->sub_stripes = 2;
77 for (i = 0; i < test->num_stripes; i++) {
78 zone_info[i].physical = 0;
79 zone_info[i].alloc_offset = test->alloc_offsets[i];
80 zone_info[i].capacity = ZONE_SIZE;
81 if (zone_info[i].alloc_offset && zone_info[i].alloc_offset < ZONE_SIZE)
82 __set_bit(i, active);
83 }
84 if (test->degraded)
85 btrfs_set_opt(fs_info->mount_opt, DEGRADED);
86 else
87 btrfs_clear_opt(fs_info->mount_opt, DEGRADED);
88
> 89 ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
90 test->last_alloc);
91
92 if (ret != test->expected_result) {
93 test_err("unexpected return value: ret %d expected %d", ret,
94 test->expected_result);
95 return -EINVAL;
96 }
97
98 if (!ret && bg->alloc_offset != test->expected_alloc_offset) {
99 test_err("unexpected alloc_offset: alloc_offset %llu expected %llu",
100 bg->alloc_offset, test->expected_alloc_offset);
101 return -EINVAL;
102 }
103
104 return 0;
105 }
106
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-23 22:17 ` kernel test robot
@ 2026-01-23 23:00 ` kernel test robot
2026-01-24 1:58 ` kernel test robot
2026-01-24 14:22 ` kernel test robot
3 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2026-01-23 23:00 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs; +Cc: llvm, oe-kbuild-all, Naohiro Aota
Hi Naohiro,
kernel test robot noticed the following build errors:
[auto build test ERROR on kdave/for-next]
[also build test ERROR on next-20260122]
[cannot apply to linus/master v6.19-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Naohiro-Aota/btrfs-tests-add-cleanup-functions-for-test-specific-functions/20260123-210300
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/20260123125920.4129581-5-naohiro.aota%40wdc.com
patch subject: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20260124/202601240640.fgNkjFQF-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260124/202601240640.fgNkjFQF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601240640.fgNkjFQF-lkp@intel.com/
All errors (new ones prefixed by >>):
>> fs/btrfs/tests/zoned-tests.c:89:8: error: call to undeclared function 'btrfs_load_block_group_by_raid_type'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
89 | ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
| ^
fs/btrfs/tests/zoned-tests.c:89:8: note: did you mean 'btrfs_load_block_group_zone_info'?
fs/btrfs/tests/../zoned.h:195:19: note: 'btrfs_load_block_group_zone_info' declared here
195 | static inline int btrfs_load_block_group_zone_info(
| ^
1 error generated.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-23 22:17 ` kernel test robot
2026-01-23 23:00 ` kernel test robot
@ 2026-01-24 1:58 ` kernel test robot
2026-01-24 14:22 ` kernel test robot
3 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2026-01-24 1:58 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs; +Cc: llvm, oe-kbuild-all, Naohiro Aota
Hi Naohiro,
kernel test robot noticed the following build errors:
[auto build test ERROR on kdave/for-next]
[also build test ERROR on next-20260123]
[cannot apply to linus/master v6.16-rc1]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Naohiro-Aota/btrfs-tests-add-cleanup-functions-for-test-specific-functions/20260123-210300
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/20260123125920.4129581-5-naohiro.aota%40wdc.com
patch subject: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
config: x86_64-kexec (https://download.01.org/0day-ci/archive/20260124/202601240254.ewdvMi5U-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260124/202601240254.ewdvMi5U-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601240254.ewdvMi5U-lkp@intel.com/
All errors (new ones prefixed by >>):
>> fs/btrfs/tests/zoned-tests.c:89:8: error: call to undeclared function 'btrfs_load_block_group_by_raid_type'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
89 | ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
| ^
fs/btrfs/tests/zoned-tests.c:89:8: note: did you mean 'btrfs_load_block_group_zone_info'?
fs/btrfs/tests/../zoned.h:195:19: note: 'btrfs_load_block_group_zone_info' declared here
195 | static inline int btrfs_load_block_group_zone_info(
| ^
1 error generated.
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-23 12:59 ` [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
` (2 preceding siblings ...)
2026-01-24 1:58 ` kernel test robot
@ 2026-01-24 14:22 ` kernel test robot
3 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2026-01-24 14:22 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs; +Cc: oe-kbuild-all, Naohiro Aota
Hi Naohiro,
kernel test robot noticed the following build warnings:
[auto build test WARNING on kdave/for-next]
[also build test WARNING on next-20260123]
[cannot apply to linus/master v6.19-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Naohiro-Aota/btrfs-tests-add-cleanup-functions-for-test-specific-functions/20260123-210300
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/20260123125920.4129581-5-naohiro.aota%40wdc.com
patch subject: [PATCH 4/4] btrfs: tests: zoned: add selftest for zoned code
config: um-randconfig-r133-20260124 (https://download.01.org/0day-ci/archive/20260124/202601242218.d7G9ZivU-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.4.0-5) 12.4.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260124/202601242218.d7G9ZivU-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202601242218.d7G9ZivU-lkp@intel.com/
sparse warnings: (new ones prefixed by >>)
>> fs/btrfs/tests/zoned-tests.c:107:35: sparse: sparse: symbol 'load_zone_info_tests' was not declared. Should it be static?
vim +/load_zone_info_tests +107 fs/btrfs/tests/zoned-tests.c
106
> 107 struct load_zone_info_test_vector load_zone_info_tests[] = {
108 /* SINGLE */
109 {
110 .description = "SINGLE: load write pointer from sequential zone",
111 .raid_type = 0,
112 .num_stripes = 1,
113 .alloc_offsets = {
114 SZ_1M,
115 },
116 .expected_alloc_offset = SZ_1M,
117 },
118 /*
119 * SINGLE block group on a conventional zone sets last_alloc outside of
120 * btrfs_load_block_group_*(). Do not test that case.
121 */
122
123 /* DUP */
124 /* Normal case */
125 {
126 .description = "DUP: having matching write pointers",
127 .raid_type = BTRFS_BLOCK_GROUP_DUP,
128 .num_stripes = 2,
129 .alloc_offsets = {
130 SZ_1M, SZ_1M,
131 },
132 .expected_alloc_offset = SZ_1M,
133 },
134 /*
135 * One sequential zone and one conventional zone, having matching
136 * last_alloc.
137 */
138 {
139 .description = "DUP: seq zone and conv zone, matching last_alloc",
140 .raid_type = BTRFS_BLOCK_GROUP_DUP,
141 .num_stripes = 2,
142 .alloc_offsets = {
143 SZ_1M, WP_CONVENTIONAL,
144 },
145 .last_alloc = SZ_1M,
146 .expected_alloc_offset = SZ_1M,
147 },
148 /*
149 * One sequential and one conventional zone, but having smaller
150 * last_alloc than write pointer.
151 */
152 {
153 .description = "DUP: seq zone and conv zone, smaller last_alloc",
154 .raid_type = BTRFS_BLOCK_GROUP_DUP,
155 .num_stripes = 2,
156 .alloc_offsets = {
157 SZ_1M, WP_CONVENTIONAL,
158 },
159 .last_alloc = 0,
160 .expected_alloc_offset = SZ_1M,
161 },
162 /* Error case: having different write pointers. */
163 {
164 .description = "DUP: fail: different write pointers",
165 .raid_type = BTRFS_BLOCK_GROUP_DUP,
166 .num_stripes = 2,
167 .alloc_offsets = {
168 SZ_1M, SZ_2M,
169 },
170 .expected_result = -EIO,
171 },
172 /* Error case: partial missing device should not happen on DUP. */
173 {
174 .description = "DUP: fail: missing device",
175 .raid_type = BTRFS_BLOCK_GROUP_DUP,
176 .num_stripes = 2,
177 .alloc_offsets = {
178 SZ_1M, WP_MISSING_DEV,
179 },
180 .expected_result = -EIO,
181 },
182 /*
183 * Error case: one sequential and one conventional zone, but having larger
184 * last_alloc than write pointer.
185 */
186 {
187 .description = "DUP: fail: seq zone and conv zone, larger last_alloc",
188 .raid_type = BTRFS_BLOCK_GROUP_DUP,
189 .num_stripes = 2,
190 .alloc_offsets = {
191 SZ_1M, WP_CONVENTIONAL,
192 },
193 .last_alloc = SZ_2M,
194 .expected_result = -EIO,
195 },
196
197 /* RAID1 */
198 /* Normal case */
199 {
200 .description = "RAID1: having matching write pointers",
201 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
202 .num_stripes = 2,
203 .alloc_offsets = {
204 SZ_1M, SZ_1M,
205 },
206 .expected_alloc_offset = SZ_1M,
207 },
208 /*
209 * One sequential zone and one conventional zone, having matching
210 * last_alloc.
211 */
212 {
213 .description = "RAID1: seq zone and conv zone, matching last_alloc",
214 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
215 .num_stripes = 2,
216 .alloc_offsets = {
217 SZ_1M, WP_CONVENTIONAL,
218 },
219 .last_alloc = SZ_1M,
220 .expected_alloc_offset = SZ_1M,
221 },
222 /*
223 * One sequential and one conventional zone, but having smaller
224 * last_alloc than write pointer.
225 */
226 {
227 .description = "RAID1: seq zone and conv zone, smaller last_alloc",
228 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
229 .num_stripes = 2,
230 .alloc_offsets = {
231 SZ_1M, WP_CONVENTIONAL,
232 },
233 .last_alloc = 0,
234 .expected_alloc_offset = SZ_1M,
235 },
236 /* Partial missing device should be recovered on DEGRADED mount */
237 {
238 .description = "RAID1: missing device on DEGRADED",
239 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
240 .num_stripes = 2,
241 .alloc_offsets = {
242 SZ_1M, WP_MISSING_DEV,
243 },
244 .degraded = true,
245 .expected_alloc_offset = SZ_1M,
246 },
247 /* Error case: having different write pointers. */
248 {
249 .description = "RAID1: fail: different write pointers",
250 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
251 .num_stripes = 2,
252 .alloc_offsets = {
253 SZ_1M, SZ_2M,
254 },
255 .expected_result = -EIO,
256 },
257 /*
258 * Partial missing device is not allowed on a non-DEGRADED mount; it
259 * never happens here as it is rejected beforehand.
260 */
261 /*
262 * Error case: one sequential and one conventional zone, but having larger
263 * last_alloc than write pointer.
264 */
265 {
266 .description = "RAID1: fail: seq zone and conv zone, larger last_alloc",
267 .raid_type = BTRFS_BLOCK_GROUP_RAID1,
268 .num_stripes = 2,
269 .alloc_offsets = {
270 SZ_1M, WP_CONVENTIONAL,
271 },
272 .last_alloc = SZ_2M,
273 .expected_result = -EIO,
274 },
275
276 /* RAID0 */
277 /* Normal case */
278 {
279 .description = "RAID0: initial partial write",
280 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
281 .num_stripes = 4,
282 .alloc_offsets = {
283 HALF_STRIPE_LEN, 0, 0, 0,
284 },
285 .expected_alloc_offset = HALF_STRIPE_LEN,
286 },
287 {
288 .description = "RAID0: while in second stripe",
289 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
290 .num_stripes = 4,
291 .alloc_offsets = {
292 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
293 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
294 },
295 .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
296 },
297 {
298 .description = "RAID0: one stripe advanced",
299 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
300 .num_stripes = 2,
301 .alloc_offsets = {
302 SZ_1M + BTRFS_STRIPE_LEN, SZ_1M,
303 },
304 .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
305 },
306 /* Error case: having different write pointers. */
307 {
308 .description = "RAID0: fail: disordered stripes",
309 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
310 .num_stripes = 4,
311 .alloc_offsets = {
312 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN * 2,
313 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
314 },
315 .expected_result = -EIO,
316 },
317 {
318 .description = "RAID0: fail: far distance",
319 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
320 .num_stripes = 4,
321 .alloc_offsets = {
322 BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN,
323 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
324 },
325 .expected_result = -EIO,
326 },
327 {
328 .description = "RAID0: fail: too many partial write",
329 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
330 .num_stripes = 4,
331 .alloc_offsets = {
332 HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
333 },
334 .expected_result = -EIO,
335 },
336 /*
337 * Error case: Partial missing device is not allowed even on non-DEGRADED
338 * mount.
339 */
340 {
341 .description = "RAID0: fail: missing device on DEGRADED",
342 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
343 .num_stripes = 2,
344 .alloc_offsets = {
345 SZ_1M, WP_MISSING_DEV,
346 },
347 .degraded = true,
348 .expected_result = -EIO,
349 },
350
351 /*
352 * One sequential zone and one conventional zone, having matching
353 * last_alloc.
354 */
355 {
356 .description = "RAID0: seq zone and conv zone, partially written stripe",
357 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
358 .num_stripes = 2,
359 .alloc_offsets = {
360 SZ_1M, WP_CONVENTIONAL,
361 },
362 .last_alloc = SZ_2M - SZ_4K,
363 .expected_alloc_offset = SZ_2M - SZ_4K,
364 },
365 {
366 .description = "RAID0: conv zone and seq zone, partially written stripe",
367 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
368 .num_stripes = 2,
369 .alloc_offsets = {
370 WP_CONVENTIONAL, SZ_1M,
371 },
372 .last_alloc = SZ_2M + SZ_4K,
373 .expected_alloc_offset = SZ_2M + SZ_4K,
374 },
375 /*
376 * Error case: one sequential and one conventional zone, but having larger
377 * last_alloc than write pointer.
378 */
379 {
380 .description = "RAID0: fail: seq zone and conv zone, larger last_alloc",
381 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
382 .num_stripes = 2,
383 .alloc_offsets = {
384 SZ_1M, WP_CONVENTIONAL,
385 },
386 .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
387 .expected_result = -EIO,
388 },
389
390 /* RAID0, 4 stripes with seq zones and conv zones. */
391 {
392 .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 6",
393 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
394 .num_stripes = 4,
395 .alloc_offsets = {
396 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
397 WP_CONVENTIONAL, WP_CONVENTIONAL,
398 },
399 .last_alloc = BTRFS_STRIPE_LEN * 6,
400 .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
401 },
402 {
403 .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 7.5",
404 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
405 .num_stripes = 4,
406 .alloc_offsets = {
407 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
408 WP_CONVENTIONAL, WP_CONVENTIONAL,
409 },
410 .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
411 .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
412 },
413 {
414 .description = "RAID0: stripes [3, ?, ?, ?] last_alloc = 1",
415 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
416 .num_stripes = 4,
417 .alloc_offsets = {
418 BTRFS_STRIPE_LEN * 3, WP_CONVENTIONAL,
419 WP_CONVENTIONAL, WP_CONVENTIONAL,
420 },
421 .last_alloc = BTRFS_STRIPE_LEN,
422 .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
423 },
424 {
425 .description = "RAID0: stripes [2, ?, 1, ?] last_alloc = 5",
426 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
427 .num_stripes = 4,
428 .alloc_offsets = {
429 BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
430 BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
431 },
432 .last_alloc = BTRFS_STRIPE_LEN * 5,
433 .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
434 },
435 {
436 .description = "RAID0: fail: stripes [2, ?, 1, ?] last_alloc = 7",
437 .raid_type = BTRFS_BLOCK_GROUP_RAID0,
438 .num_stripes = 4,
439 .alloc_offsets = {
440 BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
441 BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
442 },
443 .last_alloc = BTRFS_STRIPE_LEN * 7,
444 .expected_result = -EIO,
445 },
446
447 /* RAID10 */
448 /* Normal case */
449 {
450 .description = "RAID10: initial partial write",
451 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
452 .num_stripes = 4,
453 .alloc_offsets = {
454 HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
455 },
456 .expected_alloc_offset = HALF_STRIPE_LEN,
457 },
458 {
459 .description = "RAID10: while in second stripe",
460 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
461 .num_stripes = 8,
462 .alloc_offsets = {
463 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
464 BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
465 BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
466 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
467 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
468 },
469 .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
470 },
471 {
472 .description = "RAID10: one stripe advanced",
473 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
474 .num_stripes = 4,
475 .alloc_offsets = {
476 SZ_1M + BTRFS_STRIPE_LEN, SZ_1M + BTRFS_STRIPE_LEN,
477 SZ_1M, SZ_1M,
478 },
479 .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
480 },
481 {
482 .description = "RAID10: one stripe advanced, with conventional zone",
483 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
484 .num_stripes = 4,
485 .alloc_offsets = {
486 SZ_1M + BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
487 WP_CONVENTIONAL, SZ_1M,
488 },
489 .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
490 },
491 /* Error case: having different write pointers. */
492 {
493 .description = "RAID10: fail: disordered stripes",
494 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
495 .num_stripes = 8,
496 .alloc_offsets = {
497 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
498 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
499 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
500 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
501 },
502 .expected_result = -EIO,
503 },
504 {
505 .description = "RAID10: fail: far distance",
506 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
507 .num_stripes = 8,
508 .alloc_offsets = {
509 BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
510 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
511 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
512 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
513 },
514 .expected_result = -EIO,
515 },
516 {
517 .description = "RAID10: fail: too many partial write",
518 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
519 .num_stripes = 8,
520 .alloc_offsets = {
521 HALF_STRIPE_LEN, HALF_STRIPE_LEN,
522 HALF_STRIPE_LEN, HALF_STRIPE_LEN,
523 0, 0, 0, 0,
524 },
525 .expected_result = -EIO,
526 },
527 /*
528 * Error case: Partial missing device in RAID0 level is not allowed even on
529 * non-DEGRADED mount.
530 */
531 {
532 .description = "RAID10: fail: missing device on DEGRADED",
533 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
534 .num_stripes = 4,
535 .alloc_offsets = {
536 SZ_1M, SZ_1M,
537 WP_MISSING_DEV, WP_MISSING_DEV,
538 },
539 .degraded = true,
540 .expected_result = -EIO,
541 },
542
543 /*
544 * One sequential zone and one conventional zone, having matching
545 * last_alloc.
546 */
547 {
548 .description = "RAID10: seq zone and conv zone, partially written stripe",
549 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
550 .num_stripes = 4,
551 .alloc_offsets = {
552 SZ_1M, SZ_1M,
553 WP_CONVENTIONAL, WP_CONVENTIONAL,
554 },
555 .last_alloc = SZ_2M - SZ_4K,
556 .expected_alloc_offset = SZ_2M - SZ_4K,
557 },
558 {
559 .description = "RAID10: conv zone and seq zone, partially written stripe",
560 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
561 .num_stripes = 4,
562 .alloc_offsets = {
563 WP_CONVENTIONAL, WP_CONVENTIONAL,
564 SZ_1M, SZ_1M,
565 },
566 .last_alloc = SZ_2M + SZ_4K,
567 .expected_alloc_offset = SZ_2M + SZ_4K,
568 },
569 /*
570 * Error case: one sequential and one conventional zone, but having larger
571 * last_alloc than write pointer.
572 */
573 {
574 .description = "RAID10: fail: seq zone and conv zone, larger last_alloc",
575 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
576 .num_stripes = 4,
577 .alloc_offsets = {
578 SZ_1M, SZ_1M,
579 WP_CONVENTIONAL, WP_CONVENTIONAL,
580 },
581 .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
582 .expected_result = -EIO,
583 },
584
585 /* RAID10, 4 stripes with seq zones and conv zones. */
586 {
587 .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 6",
588 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
589 .num_stripes = 8,
590 .alloc_offsets = {
591 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
592 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
593 WP_CONVENTIONAL, WP_CONVENTIONAL,
594 WP_CONVENTIONAL, WP_CONVENTIONAL,
595 },
596 .last_alloc = BTRFS_STRIPE_LEN * 6,
597 .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
598 },
599 {
600 .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 7.5",
601 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
602 .num_stripes = 8,
603 .alloc_offsets = {
604 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
605 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
606 WP_CONVENTIONAL, WP_CONVENTIONAL,
607 WP_CONVENTIONAL, WP_CONVENTIONAL,
608 },
609 .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
610 .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
611 },
612 {
613 .description = "RAID10: stripes [3, ?, ?, ?] last_alloc = 1",
614 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
615 .num_stripes = 8,
616 .alloc_offsets = {
617 BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
618 WP_CONVENTIONAL, WP_CONVENTIONAL,
619 WP_CONVENTIONAL, WP_CONVENTIONAL,
620 WP_CONVENTIONAL, WP_CONVENTIONAL,
621 },
622 .last_alloc = BTRFS_STRIPE_LEN,
623 .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
624 },
625 {
626 .description = "RAID10: stripes [2, ?, 1, ?] last_alloc = 5",
627 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
628 .num_stripes = 8,
629 .alloc_offsets = {
630 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
631 WP_CONVENTIONAL, WP_CONVENTIONAL,
632 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
633 WP_CONVENTIONAL, WP_CONVENTIONAL,
634 },
635 .last_alloc = BTRFS_STRIPE_LEN * 5,
636 .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
637 },
638 {
639 .description = "RAID10: fail: stripes [2, ?, 1, ?] last_alloc = 7",
640 .raid_type = BTRFS_BLOCK_GROUP_RAID10,
641 .num_stripes = 8,
642 .alloc_offsets = {
643 BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
644 WP_CONVENTIONAL, WP_CONVENTIONAL,
645 BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
646 WP_CONVENTIONAL, WP_CONVENTIONAL,
647 },
648 .last_alloc = BTRFS_STRIPE_LEN * 7,
649 .expected_result = -EIO,
650 },
651 };
652
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki