* [PATCH v2 0/4] btrfs: tests: zoned: add selftest for zoned code
@ 2026-01-26 5:49 Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-26 5:49 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Having conventional zones on a RAID profile made the alloc_offset
loading code complex enough. Now is a good time to add btrfs tests for
the zoned code.
For now it tests btrfs_load_block_group_by_raid_type() with various test
cases. The load_zone_info_tests[] array defines the test cases.
- v2:
- Fix compile error without CONFIG_BLK_DEV_ZONED
- v1: https://lore.kernel.org/linux-btrfs/20260123125920.4129581-1-naohiro.aota@wdc.com/
Naohiro Aota (4):
btrfs: tests: add cleanup functions for test specific functions
btrfs: add cleanup function for btrfs_free_chunk_map
btrfs: zoned: factor out the zone loading part into a testable
function
btrfs: tests: zoned: add selftest for zoned code
fs/btrfs/Makefile | 4 +
fs/btrfs/tests/btrfs-tests.c | 3 +
fs/btrfs/tests/btrfs-tests.h | 14 +
fs/btrfs/tests/zoned-tests.c | 676 +++++++++++++++++++++++++++++++++++
fs/btrfs/volumes.h | 1 +
fs/btrfs/zoned.c | 112 +++---
fs/btrfs/zoned.h | 9 +
7 files changed, 771 insertions(+), 48 deletions(-)
create mode 100644 fs/btrfs/tests/zoned-tests.c
--
2.52.0
* [PATCH v2 1/4] btrfs: tests: add cleanup functions for test specific functions
2026-01-26 5:49 [PATCH v2 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
@ 2026-01-26 5:49 ` Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 2/4] btrfs: add cleanup function for btrfs_free_chunk_map Naohiro Aota
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-26 5:49 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Add DEFINE_FREE() based auto-cleanup helpers for
btrfs_free_dummy_fs_info() and btrfs_free_dummy_block_group().
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/tests/btrfs-tests.h | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/btrfs/tests/btrfs-tests.h b/fs/btrfs/tests/btrfs-tests.h
index 4307bdaa6749..b61dbf93e9ed 100644
--- a/fs/btrfs/tests/btrfs-tests.h
+++ b/fs/btrfs/tests/btrfs-tests.h
@@ -9,6 +9,8 @@
#include <linux/types.h>
#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+#include <linux/cleanup.h>
+
int btrfs_run_sanity_tests(void);
#define test_msg(fmt, ...) pr_info("BTRFS: selftest: " fmt "\n", ##__VA_ARGS__)
@@ -48,10 +50,14 @@ int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize);
struct inode *btrfs_new_test_inode(void);
struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize);
void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info);
+DEFINE_FREE(btrfs_free_dummy_fs_info, struct btrfs_fs_info *,
+ btrfs_free_dummy_fs_info(_T))
void btrfs_free_dummy_root(struct btrfs_root *root);
struct btrfs_block_group *
btrfs_alloc_dummy_block_group(struct btrfs_fs_info *fs_info, unsigned long length);
void btrfs_free_dummy_block_group(struct btrfs_block_group *cache);
+DEFINE_FREE(btrfs_free_dummy_block_group, struct btrfs_block_group *,
+ btrfs_free_dummy_block_group(_T))
void btrfs_init_dummy_trans(struct btrfs_trans_handle *trans,
struct btrfs_fs_info *fs_info);
void btrfs_init_dummy_transaction(struct btrfs_transaction *trans, struct btrfs_fs_info *fs_info);
--
2.52.0
* [PATCH v2 2/4] btrfs: add cleanup function for btrfs_free_chunk_map
2026-01-26 5:49 [PATCH v2 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
@ 2026-01-26 5:49 ` Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 3/4] btrfs: zoned: factor out the zone loading part into a testable function Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
3 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-26 5:49 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Add a DEFINE_FREE() based cleanup helper for btrfs_free_chunk_map(), so
that callers can rely on __free() scope-based cleanup of struct
btrfs_chunk_map.

Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/volumes.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index e4644352314a..8b88a21b16aa 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -633,6 +633,7 @@ static inline void btrfs_free_chunk_map(struct btrfs_chunk_map *map)
kfree(map);
}
}
+DEFINE_FREE(btrfs_free_chunk_map, struct btrfs_chunk_map *, btrfs_free_chunk_map(_T))
struct btrfs_balance_control {
struct btrfs_balance_args data;
--
2.52.0
* [PATCH v2 3/4] btrfs: zoned: factor out the zone loading part into a testable function
2026-01-26 5:49 [PATCH v2 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 1/4] btrfs: tests: add cleanup functions for test specific functions Naohiro Aota
2026-01-26 5:49 ` [PATCH v2 2/4] btrfs: add cleanup function for btrfs_free_chunk_map Naohiro Aota
@ 2026-01-26 5:49 ` Naohiro Aota
2026-01-26 9:51 ` Johannes Thumshirn
2026-01-26 5:49 ` [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
3 siblings, 1 reply; 9+ messages in thread
From: Naohiro Aota @ 2026-01-26 5:49 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Factor the btrfs_load_block_group_*() calling path out into its own
function, so that it can serve as an entry point for a unit test.
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/zoned.c | 109 ++++++++++++++++++++++++++---------------------
fs/btrfs/zoned.h | 9 ++++
2 files changed, 70 insertions(+), 48 deletions(-)
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 576e8d3ef69c..052d6988ab8c 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -1812,6 +1812,65 @@ static int btrfs_load_block_group_raid10(struct btrfs_block_group *bg,
return 0;
}
+EXPORT_FOR_TESTS
+int btrfs_load_block_group_by_raid_type(struct btrfs_block_group *bg,
+ struct btrfs_chunk_map *map,
+ struct zone_info *zone_info,
+ unsigned long *active, u64 last_alloc)
+{
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+ u64 profile;
+ int ret;
+
+ profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
+ switch (profile) {
+ case 0: /* single */
+ ret = btrfs_load_block_group_single(bg, &zone_info[0], active);
+ break;
+ case BTRFS_BLOCK_GROUP_DUP:
+ ret = btrfs_load_block_group_dup(bg, map, zone_info, active, last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID1:
+ case BTRFS_BLOCK_GROUP_RAID1C3:
+ case BTRFS_BLOCK_GROUP_RAID1C4:
+ ret = btrfs_load_block_group_raid1(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID0:
+ ret = btrfs_load_block_group_raid0(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID10:
+ ret = btrfs_load_block_group_raid10(bg, map, zone_info, active,
+ last_alloc);
+ break;
+ case BTRFS_BLOCK_GROUP_RAID5:
+ case BTRFS_BLOCK_GROUP_RAID6:
+ default:
+ btrfs_err(fs_info, "zoned: profile %s not yet supported",
+ btrfs_bg_type_to_raid_name(map->type));
+ return -EINVAL;
+ }
+
+ if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 &&
+ profile != BTRFS_BLOCK_GROUP_RAID10) {
+ /*
+ * Detected broken write pointer. Make this block group
+ * unallocatable by setting the allocation pointer at the end of
+ * allocatable region. Relocating this block group will fix the
+ * mismatch.
+ *
+ * Currently, we cannot handle RAID0 or RAID10 case like this
+ * because we don't have a proper zone_capacity value. But,
+ * reading from this block group won't work anyway by a missing
+ * stripe.
+ */
+ bg->alloc_offset = bg->zone_capacity;
+ }
+
+ return ret;
+}
+
int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
{
struct btrfs_fs_info *fs_info = cache->fs_info;
@@ -1824,7 +1883,6 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
unsigned long *active = NULL;
u64 last_alloc = 0;
u32 num_sequential = 0, num_conventional = 0;
- u64 profile;
if (!btrfs_is_zoned(fs_info))
return 0;
@@ -1884,53 +1942,8 @@ int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache, bool new)
}
}
- profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK;
- switch (profile) {
- case 0: /* single */
- ret = btrfs_load_block_group_single(cache, &zone_info[0], active);
- break;
- case BTRFS_BLOCK_GROUP_DUP:
- ret = btrfs_load_block_group_dup(cache, map, zone_info, active,
- last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID1:
- case BTRFS_BLOCK_GROUP_RAID1C3:
- case BTRFS_BLOCK_GROUP_RAID1C4:
- ret = btrfs_load_block_group_raid1(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID0:
- ret = btrfs_load_block_group_raid0(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID10:
- ret = btrfs_load_block_group_raid10(cache, map, zone_info,
- active, last_alloc);
- break;
- case BTRFS_BLOCK_GROUP_RAID5:
- case BTRFS_BLOCK_GROUP_RAID6:
- default:
- btrfs_err(fs_info, "zoned: profile %s not yet supported",
- btrfs_bg_type_to_raid_name(map->type));
- ret = -EINVAL;
- goto out;
- }
-
- if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 &&
- profile != BTRFS_BLOCK_GROUP_RAID10) {
- /*
- * Detected broken write pointer. Make this block group
- * unallocatable by setting the allocation pointer at the end of
- * allocatable region. Relocating this block group will fix the
- * mismatch.
- *
- * Currently, we cannot handle RAID0 or RAID10 case like this
- * because we don't have a proper zone_capacity value. But,
- * reading from this block group won't work anyway by a missing
- * stripe.
- */
- cache->alloc_offset = cache->zone_capacity;
- }
+ ret = btrfs_load_block_group_by_raid_type(cache, map, zone_info, active,
+ last_alloc);
out:
/* Reject non SINGLE data profiles without RST */
diff --git a/fs/btrfs/zoned.h b/fs/btrfs/zoned.h
index 2fdc88c6fa3c..8e21a836f858 100644
--- a/fs/btrfs/zoned.h
+++ b/fs/btrfs/zoned.h
@@ -99,6 +99,15 @@ void btrfs_check_active_zone_reservation(struct btrfs_fs_info *fs_info);
int btrfs_reset_unused_block_groups(struct btrfs_space_info *space_info, u64 num_bytes);
void btrfs_show_zoned_stats(struct btrfs_fs_info *fs_info, struct seq_file *seq);
+#ifdef CONFIG_BTRFS_FS_RUN_SANITY_TESTS
+struct zone_info;
+
+int btrfs_load_block_group_by_raid_type(struct btrfs_block_group *bg,
+ struct btrfs_chunk_map *map,
+ struct zone_info *zone_info,
+ unsigned long *active, u64 last_alloc);
+#endif
+
#else /* CONFIG_BLK_DEV_ZONED */
static inline int btrfs_get_dev_zone_info_all_devices(struct btrfs_fs_info *fs_info)
--
2.52.0
* [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-26 5:49 [PATCH v2 0/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
` (2 preceding siblings ...)
2026-01-26 5:49 ` [PATCH v2 3/4] btrfs: zoned: factor out the zone loading part into a testable function Naohiro Aota
@ 2026-01-26 5:49 ` Naohiro Aota
2026-01-26 9:36 ` Johannes Thumshirn
2026-02-03 6:27 ` David Sterba
3 siblings, 2 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-26 5:49 UTC (permalink / raw)
To: linux-btrfs; +Cc: Naohiro Aota
Add a test function for the zoned code. For now, it tests
btrfs_load_block_group_by_raid_type() with various test cases. The
load_zone_info_tests[] array defines the test cases.
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
---
fs/btrfs/Makefile | 4 +
fs/btrfs/tests/btrfs-tests.c | 3 +
fs/btrfs/tests/btrfs-tests.h | 8 +
fs/btrfs/tests/zoned-tests.c | 676 +++++++++++++++++++++++++++++++++++
fs/btrfs/zoned.c | 3 +
5 files changed, 694 insertions(+)
create mode 100644 fs/btrfs/tests/zoned-tests.c
diff --git a/fs/btrfs/Makefile b/fs/btrfs/Makefile
index 743d7677b175..875740376ef1 100644
--- a/fs/btrfs/Makefile
+++ b/fs/btrfs/Makefile
@@ -45,3 +45,7 @@ btrfs-$(CONFIG_BTRFS_FS_RUN_SANITY_TESTS) += tests/free-space-tests.o \
tests/extent-io-tests.o tests/inode-tests.o tests/qgroup-tests.o \
tests/free-space-tree-tests.o tests/extent-map-tests.o \
tests/raid-stripe-tree-tests.o tests/delayed-refs-tests.o
+
+ifeq ($(CONFIG_BLK_DEV_ZONED),y)
+btrfs-$(CONFIG_BTRFS_FS_RUN_SANITY_TESTS) += tests/zoned-tests.o
+endif
diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
index b576897d71cc..2933b487bd25 100644
--- a/fs/btrfs/tests/btrfs-tests.c
+++ b/fs/btrfs/tests/btrfs-tests.c
@@ -304,6 +304,9 @@ int btrfs_run_sanity_tests(void)
}
}
ret = btrfs_test_extent_map();
+ if (ret)
+ goto out;
+ ret = btrfs_test_zoned();
out:
btrfs_destroy_test_fs();
diff --git a/fs/btrfs/tests/btrfs-tests.h b/fs/btrfs/tests/btrfs-tests.h
index b61dbf93e9ed..0a73d332c6ce 100644
--- a/fs/btrfs/tests/btrfs-tests.h
+++ b/fs/btrfs/tests/btrfs-tests.h
@@ -47,6 +47,14 @@ int btrfs_test_free_space_tree(u32 sectorsize, u32 nodesize);
int btrfs_test_raid_stripe_tree(u32 sectorsize, u32 nodesize);
int btrfs_test_extent_map(void);
int btrfs_test_delayed_refs(u32 sectorsize, u32 nodesize);
+#ifdef CONFIG_BLK_DEV_ZONED
+int btrfs_test_zoned(void);
+#else
+static inline int btrfs_test_zoned(void)
+{
+ return 0;
+}
+#endif
struct inode *btrfs_new_test_inode(void);
struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize);
void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info);
diff --git a/fs/btrfs/tests/zoned-tests.c b/fs/btrfs/tests/zoned-tests.c
new file mode 100644
index 000000000000..b3454c7122bf
--- /dev/null
+++ b/fs/btrfs/tests/zoned-tests.c
@@ -0,0 +1,676 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2015 Facebook. All rights reserved.
+ */
+
+#include <linux/cleanup.h>
+#include <linux/sizes.h>
+
+#include "btrfs-tests.h"
+#include "../space-info.h"
+#include "../volumes.h"
+#include "../zoned.h"
+
+#define WP_MISSING_DEV ((u64)-1)
+#define WP_CONVENTIONAL ((u64)-2)
+#define ZONE_SIZE SZ_256M
+
+#define HALF_STRIPE_LEN (BTRFS_STRIPE_LEN >> 1)
+
+struct load_zone_info_test_vector {
+ u64 raid_type;
+ u64 num_stripes;
+ u64 alloc_offsets[8];
+ u64 last_alloc;
+ u64 bg_length;
+ bool degraded;
+
+ int expected_result;
+ u64 expected_alloc_offset;
+
+ const char *description;
+};
+
+struct zone_info {
+ u64 physical;
+ u64 capacity;
+ u64 alloc_offset;
+};
+
+static int test_load_zone_info(struct btrfs_fs_info *fs_info,
+ struct load_zone_info_test_vector *test)
+{
+ struct btrfs_block_group *bg __free(btrfs_free_dummy_block_group) = NULL;
+ struct btrfs_chunk_map *map __free(btrfs_free_chunk_map) = NULL;
+ struct zone_info AUTO_KFREE(zone_info);
+ unsigned long AUTO_KFREE(active);
+ int i, ret;
+
+ bg = btrfs_alloc_dummy_block_group(fs_info, test->bg_length);
+ if (!bg) {
+ test_std_err(TEST_ALLOC_BLOCK_GROUP);
+ return -ENOMEM;
+ }
+
+ map = btrfs_alloc_chunk_map(test->num_stripes, GFP_KERNEL);
+ if (!map) {
+ test_std_err(TEST_ALLOC_EXTENT_MAP);
+ return -ENOMEM;
+ }
+
+ zone_info = kcalloc(test->num_stripes, sizeof(*zone_info), GFP_KERNEL);
+ if (!zone_info) {
+ test_err("cannot allocate zone info");
+ return -ENOMEM;
+ }
+
+ active = bitmap_zalloc(test->num_stripes, GFP_KERNEL);
+ if (!active) {
+ test_err("cannot allocate active bitmap");
+ return -ENOMEM;
+ }
+
+ map->type = test->raid_type;
+ map->num_stripes = test->num_stripes;
+ if (test->raid_type == BTRFS_BLOCK_GROUP_RAID10)
+ map->sub_stripes = 2;
+ for (i = 0; i < test->num_stripes; i++) {
+ zone_info[i].physical = 0;
+ zone_info[i].alloc_offset = test->alloc_offsets[i];
+ zone_info[i].capacity = ZONE_SIZE;
+ if (zone_info[i].alloc_offset && zone_info[i].alloc_offset < ZONE_SIZE)
+ __set_bit(i, active);
+ }
+ if (test->degraded)
+ btrfs_set_opt(fs_info->mount_opt, DEGRADED);
+ else
+ btrfs_clear_opt(fs_info->mount_opt, DEGRADED);
+
+ ret = btrfs_load_block_group_by_raid_type(bg, map, zone_info, active,
+ test->last_alloc);
+
+ if (ret != test->expected_result) {
+ test_err("unexpected return value: ret %d expected %d", ret,
+ test->expected_result);
+ return -EINVAL;
+ }
+
+ if (!ret && bg->alloc_offset != test->expected_alloc_offset) {
+ test_err("unexpected alloc_offset: alloc_offset %llu expected %llu",
+ bg->alloc_offset, test->expected_alloc_offset);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static struct load_zone_info_test_vector load_zone_info_tests[] = {
+ /* SINGLE */
+ {
+ .description = "SINGLE: load write pointer from sequential zone",
+ .raid_type = 0,
+ .num_stripes = 1,
+ .alloc_offsets = {
+ SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * SINGLE block group on a conventional zone sets last_alloc outside of
+ * btrfs_load_block_group_*(). Do not test that case.
+ */
+
+ /* DUP */
+ /* Normal case */
+ {
+ .description = "DUP: having matching write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "DUP: seq zone and conv zone, matching last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_1M,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential and one conventional zone, but having smaller
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "DUP: seq zone and conv zone, smaller last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = 0,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "DUP: fail: different write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_2M,
+ },
+ .expected_result = -EIO,
+ },
+ /* Error case: partial missing device should not happen on DUP. */
+ {
+ .description = "DUP: fail: missing device",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "DUP: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_DUP,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M,
+ .expected_result = -EIO,
+ },
+
+ /* RAID1 */
+ /* Normal case */
+ {
+ .description = "RAID1: having matching write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID1: seq zone and conv zone, matching last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_1M,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /*
+ * One sequential and one conventional zone, but having smaller
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID1: seq zone and conv zone, smaller last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = 0,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Partial missing device should be recovered on DEGRADED mount */
+ {
+ .description = "RAID1: missing device recovered on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_alloc_offset = SZ_1M,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID1: fail: different write pointers",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, SZ_2M,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * A partial missing device on a non-DEGRADED mount never happens here,
+ * as it is rejected beforehand.
+ */
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID1: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID1,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M,
+ .expected_result = -EIO,
+ },
+
+ /* RAID0 */
+ /* Normal case */
+ {
+ .description = "RAID0: initial partial write",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, 0, 0, 0,
+ },
+ .expected_alloc_offset = HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: while in second stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: one stripe advanced",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID0: fail: disordered stripes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID0: fail: far distance",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID0: fail: too many partial writes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: a partial missing device is not allowed even on a
+ * DEGRADED mount.
+ */
+ {
+ .description = "RAID0: fail: missing device on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_result = -EIO,
+ },
+
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID0: seq zone and conv zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M - SZ_4K,
+ .expected_alloc_offset = SZ_2M - SZ_4K,
+ },
+ {
+ .description = "RAID0: conv zone and seq zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ WP_CONVENTIONAL, SZ_1M,
+ },
+ .last_alloc = SZ_2M + SZ_4K,
+ .expected_alloc_offset = SZ_2M + SZ_4K,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID0: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 2,
+ .alloc_offsets = {
+ SZ_1M, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
+ .expected_result = -EIO,
+ },
+
+ /* RAID0, 4 stripes with seq zones and conv zones. */
+ {
+ .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 6",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 6,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
+ },
+ {
+ .description = "RAID0: stripes [2, 2, ?, ?] last_alloc = 7.5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID0: stripes [3, ?, ?, ?] last_alloc = 1",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
+ },
+ {
+ .description = "RAID0: stripes [2, ?, 1, ?] last_alloc = 5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 5,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
+ },
+ {
+ .description = "RAID0: fail: stripes [2, ?, 1, ?] last_alloc = 7",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID0,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7,
+ .expected_result = -EIO,
+ },
+
+ /* RAID10 */
+ /* Normal case */
+ {
+ .description = "RAID10: initial partial write",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN, 0, 0,
+ },
+ .expected_alloc_offset = HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: while in second stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN + HALF_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: one stripe advanced",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, SZ_1M + BTRFS_STRIPE_LEN,
+ SZ_1M, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: one stripe advanced, with conventional zone",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M + BTRFS_STRIPE_LEN, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, SZ_1M,
+ },
+ .expected_alloc_offset = SZ_2M + BTRFS_STRIPE_LEN,
+ },
+ /* Error case: having different write pointers. */
+ {
+ .description = "RAID10: fail: disordered stripes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID10: fail: far distance",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ },
+ .expected_result = -EIO,
+ },
+ {
+ .description = "RAID10: fail: too many partial writes",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN,
+ HALF_STRIPE_LEN, HALF_STRIPE_LEN,
+ 0, 0, 0, 0,
+ },
+ .expected_result = -EIO,
+ },
+ /*
+ * Error case: a partial missing device at the RAID0 level is not
+ * allowed even on a DEGRADED mount.
+ */
+ {
+ .description = "RAID10: fail: missing device on DEGRADED",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_MISSING_DEV, WP_MISSING_DEV,
+ },
+ .degraded = true,
+ .expected_result = -EIO,
+ },
+
+ /*
+ * One sequential zone and one conventional zone, having matching
+ * last_alloc.
+ */
+ {
+ .description = "RAID10: seq zone and conv zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M - SZ_4K,
+ .expected_alloc_offset = SZ_2M - SZ_4K,
+ },
+ {
+ .description = "RAID10: conv zone and seq zone, partially written stripe",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ SZ_1M, SZ_1M,
+ },
+ .last_alloc = SZ_2M + SZ_4K,
+ .expected_alloc_offset = SZ_2M + SZ_4K,
+ },
+ /*
+ * Error case: one sequential and one conventional zone, but having larger
+ * last_alloc than write pointer.
+ */
+ {
+ .description = "RAID10: fail: seq zone and conv zone, larger last_alloc",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 4,
+ .alloc_offsets = {
+ SZ_1M, SZ_1M,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = SZ_2M + BTRFS_STRIPE_LEN * 2,
+ .expected_result = -EIO,
+ },
+
+ /* RAID10, 4 stripes with seq zones and conv zones. */
+ {
+ .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 6",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 6,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 6,
+ },
+ {
+ .description = "RAID10: stripes [2, 2, ?, ?] last_alloc = 7.5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 7 + HALF_STRIPE_LEN,
+ },
+ {
+ .description = "RAID10: stripes [3, ?, ?, ?] last_alloc = 1",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 3, BTRFS_STRIPE_LEN * 3,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 9,
+ },
+ {
+ .description = "RAID10: stripes [2, ?, 1, ?] last_alloc = 5",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 5,
+ .expected_alloc_offset = BTRFS_STRIPE_LEN * 5,
+ },
+ {
+ .description = "RAID10: fail: stripes [2, ?, 1, ?] last_alloc = 7",
+ .raid_type = BTRFS_BLOCK_GROUP_RAID10,
+ .num_stripes = 8,
+ .alloc_offsets = {
+ BTRFS_STRIPE_LEN * 2, BTRFS_STRIPE_LEN * 2,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ BTRFS_STRIPE_LEN, BTRFS_STRIPE_LEN,
+ WP_CONVENTIONAL, WP_CONVENTIONAL,
+ },
+ .last_alloc = BTRFS_STRIPE_LEN * 7,
+ .expected_result = -EIO,
+ },
+};
+
+int btrfs_test_zoned(void)
+{
+ struct btrfs_fs_info *fs_info __free(btrfs_free_dummy_fs_info) = NULL;
+ int ret;
+
+ test_msg("running zoned tests. error messages are expected.");
+
+ fs_info = btrfs_alloc_dummy_fs_info(PAGE_SIZE, PAGE_SIZE);
+ if (!fs_info) {
+ test_std_err(TEST_ALLOC_FS_INFO);
+ return -ENOMEM;
+ }
+
+ for (int i = 0; i < ARRAY_SIZE(load_zone_info_tests); i++) {
+ ret = test_load_zone_info(fs_info, &load_zone_info_tests[i]);
+ if (ret) {
+ test_err("test case \"%s\" failed",
+ load_zone_info_tests[i].description);
+ return ret;
+ }
+ }
+
+ return 0;
+}
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 052d6988ab8c..75351234eb36 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2370,6 +2370,9 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
if (!btrfs_is_zoned(block_group->fs_info))
return true;
+ if (unlikely(btrfs_is_testing(fs_info)))
+ return true;
+
map = block_group->physical_map;
spin_lock(&fs_info->zone_active_bgs_lock);
--
2.52.0
* Re: [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-26 5:49 ` [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
@ 2026-01-26 9:36 ` Johannes Thumshirn
2026-01-28 4:57 ` Naohiro Aota
2026-02-03 6:27 ` David Sterba
1 sibling, 1 reply; 9+ messages in thread
From: Johannes Thumshirn @ 2026-01-26 9:36 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs@vger.kernel.org
On 1/26/26 6:51 AM, Naohiro Aota wrote:
> + * Copyright (C) 2015 Facebook. All rights reserved.
Copyright (C) 2026 Western Digital. All rights reserved.
* Re: [PATCH v2 3/4] btrfs: zoned: factor out the zone loading part into a testable function
2026-01-26 5:49 ` [PATCH v2 3/4] btrfs: zoned: factor out the zone loading part into a testable function Naohiro Aota
@ 2026-01-26 9:51 ` Johannes Thumshirn
0 siblings, 0 replies; 9+ messages in thread
From: Johannes Thumshirn @ 2026-01-26 9:51 UTC (permalink / raw)
To: Naohiro Aota, linux-btrfs@vger.kernel.org
Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
* Re: [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-26 9:36 ` Johannes Thumshirn
@ 2026-01-28 4:57 ` Naohiro Aota
0 siblings, 0 replies; 9+ messages in thread
From: Naohiro Aota @ 2026-01-28 4:57 UTC (permalink / raw)
To: Johannes Thumshirn, Naohiro Aota, linux-btrfs@vger.kernel.org
On Mon Jan 26, 2026 at 6:36 PM JST, Johannes Thumshirn wrote:
> On 1/26/26 6:51 AM, Naohiro Aota wrote:
>> + * Copyright (C) 2015 Facebook. All rights reserved.
>
> Copyright (C) 2026 Western Digital. All rights reserved.
Ouch, I forgot to update this line. Thanks.
* Re: [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code
2026-01-26 5:49 ` [PATCH v2 4/4] btrfs: tests: zoned: add selftest for zoned code Naohiro Aota
2026-01-26 9:36 ` Johannes Thumshirn
@ 2026-02-03 6:27 ` David Sterba
1 sibling, 0 replies; 9+ messages in thread
From: David Sterba @ 2026-02-03 6:27 UTC (permalink / raw)
To: Naohiro Aota; +Cc: linux-btrfs
On Mon, Jan 26, 2026 at 02:49:53PM +0900, Naohiro Aota wrote:
> Add a test function for the zoned code, for now it tests
> btrfs_load_block_group_by_raid_type() with various test cases. The
> load_zone_info_tests[] array defines the test cases.
>
> Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
This patch had some conflicts in for-next and I also accidentally did
not add it to misc-next. Still I'd like to add it to the 6.20/7.0 queue.
It's too late for the first batch, please refresh the patch and resend
on top of for-next, I'll send it in 2nd pull request some time during
the merge window. Thanks.