public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5
@ 2026-04-19  3:09 Yu Kuai
  2026-04-19  3:09 ` [PATCH] md: add exact bitmap mapping and reshape hooks Yu Kuai
                   ` (18 more replies)
  0 siblings, 19 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

llbitmap currently tracks a single live bitmap geometry, but reshape needs the
old and new mappings to coexist until the reshape completes. That breaks
normal bitmap accounting in three places:

- bios that cross reshape_position cannot be accounted as one bitmap range
- checkpointed dirty state must move forward as reshape advances
- interrupted reshape must resume with the same llbitmap interpretation after
  reassembly

The main pieces are:

- add bitmap-side range preparation so md core uses one bitmap API
- split reshape-crossing bios in raid10/raid5 before bitmap accounting
- let llbitmap stage target geometry and remap checkpointed bits in one bitmap
- teach raid10/raid5 to provide exact old/new mapping and drive llbitmap
  reshape progress

Validation

Runtime:
- custom QEMU llbitmap reshape matrix passed for raid10 and raid5:
  - expanded/changed bit-state checks with query_bit
  - cross-boundary writes during reshape
  - heavier raid5 write path that previously warned/crashed
  - degraded stop/assemble data verification after reshape
  - interrupted reshape stop/assemble/continue for raid10 and raid5
- mdadm test 02r5grow passed in disk mode

The temporary query_bit validation hook is intentionally kept out of this
submission series.

Yu Kuai (19):
  md: add exact bitmap mapping and reshape hooks
  md: add helper to split bios at reshape offset
  md/md-llbitmap: track bitmap sync_size explicitly
  md/md-llbitmap: allocate page controls independently
  md/md-llbitmap: grow the page cache in place for reshape
  md/md-llbitmap: track target reshape geometry fields
  md/md-llbitmap: finish reshape geometry
  md/md-llbitmap: refuse reshape while llbitmap still needs sync
  md/md-llbitmap: add reshape range mapping helpers
  md/md-llbitmap: don't skip reshape ranges from bitmap state
  md/md-llbitmap: remap checkpointed bits as reshape progresses
  md/md-llbitmap: clamp state-machine walks to tracked bits
  md/raid10: reject llbitmap chunk shrink during reshape
  md/raid10: wire llbitmap reshape lifecycle
  md/raid10: split reshape bios before bitmap accounting
  md/raid5: add exact old and new llbitmap mapping helpers
  md/raid5: reject llbitmap chunk shrink during reshape
  md/raid5: wire llbitmap reshape lifecycle
  md/raid5: split reshape bios before bitmap accounting

 drivers/md/md-bitmap.c   |   8 +
 drivers/md/md-bitmap.h   |   7 +
 drivers/md/md-llbitmap.c | 602 +++++++++++++++++++++++++++++++++++---
 drivers/md/md.c          |  51 +++
 drivers/md/md.h          |   8 +
 drivers/md/raid10.c      |  44 ++-
 drivers/md/raid5.c       | 114 ++++++--
 7 files changed, 760 insertions(+), 74 deletions(-)

-- 
2.51.0

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH] md: add exact bitmap mapping and reshape hooks
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md: add helper to split bios at reshape offset Yu Kuai
                   ` (17 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Add the bitmap mapping and reshape hooks that llbitmap reshape support
needs, without teaching md core to account a single bio against
multiple bitmap ranges.

This also adds the old/new bitmap geometry helpers used by
personalities to describe reshape mapping to llbitmap.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-bitmap.c   |  8 ++++++++
 drivers/md/md-bitmap.h   |  8 ++++++++
 drivers/md/md-llbitmap.c |  8 ++++++++
 drivers/md/md.c          | 12 ++++++++----
 drivers/md/md.h          |  4 ++++
 5 files changed, 36 insertions(+), 4 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 83378c033c72..69b3f5fc6bc0 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1728,6 +1728,13 @@ static void bitmap_start_write(struct mddev *mddev, sector_t offset,
 	}
 }
 
+static void bitmap_prepare_range(struct mddev *mddev, sector_t *offset,
+				 unsigned long *sectors)
+{
+	if (mddev->pers->bitmap_sector)
+		mddev->pers->bitmap_sector(mddev, offset, sectors);
+}
+
 static void bitmap_end_write(struct mddev *mddev, sector_t offset,
 			     unsigned long sectors)
 {
@@ -2987,6 +2994,7 @@ static struct bitmap_operations bitmap_ops = {
 	.flush			= bitmap_flush,
 	.write_all		= bitmap_write_all,
 	.dirty_bits		= bitmap_dirty_bits,
+	.prepare_range		= bitmap_prepare_range,
 	.unplug			= bitmap_unplug,
 	.daemon_work		= bitmap_daemon_work,
 
diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
index b42a28fa83a0..8edc7218a1f4 100644
--- a/drivers/md/md-bitmap.h
+++ b/drivers/md/md-bitmap.h
@@ -93,6 +93,14 @@ struct bitmap_operations {
 	void (*write_all)(struct mddev *mddev);
 	void (*dirty_bits)(struct mddev *mddev, unsigned long s,
 			   unsigned long e);
+	/* Prepare a range for this bitmap implementation. */
+	void (*prepare_range)(struct mddev *mddev,
+			      sector_t *offset,
+			      unsigned long *sectors);
+	void (*reshape_finish)(struct mddev *mddev);
+	int (*reshape_can_start)(struct mddev *mddev);
+	void (*reshape_mark)(struct mddev *mddev, sector_t old_pos,
+			     sector_t new_pos);
 	void (*unplug)(struct mddev *mddev, bool sync);
 	void (*daemon_work)(struct mddev *mddev);
 
diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index cdfecaca216b..62b8f3efa9f5 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -1061,6 +1061,13 @@ static void llbitmap_destroy(struct mddev *mddev)
 	mutex_unlock(&mddev->bitmap_info.mutex);
 }
 
+static void llbitmap_prepare_range(struct mddev *mddev, sector_t *offset,
+				   unsigned long *sectors)
+{
+	if (mddev->pers->bitmap_sector)
+		mddev->pers->bitmap_sector(mddev, offset, sectors);
+}
+
 static void llbitmap_start_write(struct mddev *mddev, sector_t offset,
 				 unsigned long sectors)
 {
@@ -1596,6 +1603,7 @@ static struct bitmap_operations llbitmap_ops = {
 	.update_sb		= llbitmap_update_sb,
 	.get_stats		= llbitmap_get_stats,
 	.dirty_bits		= llbitmap_dirty_bits,
+	.prepare_range		= llbitmap_prepare_range,
 	.write_all		= llbitmap_write_all,
 
 	.group			= &md_llbitmap_group,
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c9597c6bd341..5b001e3e4cec 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9301,6 +9301,12 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);
 
+static void md_bitmap_prepare_range(struct mddev *mddev, sector_t *offset,
+				    unsigned long *sectors)
+{
+	mddev->bitmap_ops->prepare_range(mddev, offset, sectors);
+}
+
 static void md_bitmap_start(struct mddev *mddev,
 			    struct md_io_clone *md_io_clone)
 {
@@ -9308,10 +9314,8 @@ static void md_bitmap_start(struct mddev *mddev,
 			   mddev->bitmap_ops->start_discard :
 			   mddev->bitmap_ops->start_write;
 
-	if (mddev->pers->bitmap_sector)
-		mddev->pers->bitmap_sector(mddev, &md_io_clone->offset,
-					   &md_io_clone->sectors);
-
+	md_bitmap_prepare_range(mddev, &md_io_clone->offset,
+				&md_io_clone->sectors);
 	fn(mddev, md_io_clone->offset, md_io_clone->sectors);
 }
 
diff --git a/drivers/md/md.h b/drivers/md/md.h
index ac84289664cd..82a36bdaeaaa 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -793,6 +793,10 @@ struct md_personality
 	/* convert io ranges from array to bitmap */
 	void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
 			      unsigned long *sectors);
+	void (*bitmap_sector_map)(struct mddev *mddev, sector_t *offset,
+				  unsigned long *sectors, bool previous);
+	sector_t (*bitmap_sync_size)(struct mddev *mddev, bool previous);
+	sector_t (*bitmap_array_sectors)(struct mddev *mddev, bool previous);
 };
 
 struct md_sysfs_entry {
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md: add helper to split bios at reshape offset
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
  2026-04-19  3:09 ` [PATCH] md: add exact bitmap mapping and reshape hooks Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track bitmap sync_size explicitly Yu Kuai
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Add mddev_bio_split_at_reshape_offset() so personalities can share
reshape-offset bio splitting instead of open-coding the same helper in
multiple places.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md.c | 39 +++++++++++++++++++++++++++++++++++++++
 drivers/md/md.h |  4 ++++
 2 files changed, 43 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5b001e3e4cec..d222e365141b 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9301,6 +9301,45 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);
 
+struct bio *mddev_bio_split_at_reshape_offset(struct mddev *mddev,
+					      struct bio *bio,
+					      unsigned int *max_sectors,
+					      struct bio_set *bs)
+{
+	sector_t boundary;
+	sector_t start;
+	sector_t end;
+	unsigned int split_sectors;
+
+	split_sectors = bio_sectors(bio);
+	if (max_sectors && *max_sectors && *max_sectors < split_sectors)
+		split_sectors = *max_sectors;
+
+	if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		goto split;
+
+	boundary = mddev->reshape_position;
+	start = bio->bi_iter.bi_sector;
+	end = bio_end_sector(bio);
+	if (start >= boundary || end <= boundary)
+		goto split;
+
+	if (boundary - start < split_sectors)
+		split_sectors = boundary - start;
+
+split:
+	if (max_sectors)
+		*max_sectors = split_sectors;
+	if (split_sectors < bio_sectors(bio)) {
+		bio = bio_submit_split_bioset(bio, split_sectors, bs);
+		if (bio)
+			bio->bi_opf |= REQ_NOMERGE;
+	}
+
+	return bio;
+}
+EXPORT_SYMBOL_GPL(mddev_bio_split_at_reshape_offset);
+
 static void md_bitmap_prepare_range(struct mddev *mddev, sector_t *offset,
 				    unsigned long *sectors)
 {
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 82a36bdaeaaa..7247eeca5aa4 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -920,6 +920,10 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev);
 extern void md_finish_reshape(struct mddev *mddev);
 void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 			struct bio *bio, sector_t start, sector_t size);
+struct bio *mddev_bio_split_at_reshape_offset(struct mddev *mddev,
+					      struct bio *bio,
+					      unsigned int *max_sectors,
+					      struct bio_set *bs);
 void md_account_bio(struct mddev *mddev, struct bio **bio);
 void md_free_cloned_bio(struct bio *bio);
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/md-llbitmap: track bitmap sync_size explicitly
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
  2026-04-19  3:09 ` [PATCH] md: add exact bitmap mapping and reshape hooks Yu Kuai
  2026-04-19  3:09 ` [PATCH] md: add helper to split bios at reshape offset Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: allocate page controls independently Yu Kuai
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Track llbitmap's own sync_size instead of always using
mddev->resync_max_sectors directly.

This is the minimal bookkeeping needed before llbitmap can track old
and new reshape geometry independently.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index 62b8f3efa9f5..547b1317df43 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -267,6 +267,8 @@ struct llbitmap {
 	unsigned long chunksize;
 	/* total number of chunks */
 	unsigned long chunks;
+	/* total number of sectors tracked by current bitmap geometry */
+	sector_t sync_size;
 	unsigned long last_end_sync;
 	/*
 	 * time in seconds that dirty bits will be cleared if the page is not
@@ -791,6 +793,7 @@ static int llbitmap_init(struct llbitmap *llbitmap)
 	llbitmap->chunkshift = ffz(~chunksize);
 	llbitmap->chunksize = chunksize;
 	llbitmap->chunks = chunks;
+	llbitmap->sync_size = blocks;
 	mddev->bitmap_info.daemon_sleep = DEFAULT_DAEMON_SLEEP;
 
 	ret = llbitmap_cache_pages(llbitmap);
@@ -811,6 +814,7 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 	unsigned long daemon_sleep;
 	unsigned long chunksize;
 	unsigned long events;
+	sector_t sync_size;
 	struct page *sb_page;
 	bitmap_super_t *sb;
 	int ret = -EINVAL;
@@ -860,6 +864,9 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 		goto out_put_page;
 	}
 
+	sync_size = le64_to_cpu(sb->sync_size);
+	if (!sync_size)
+		sync_size = mddev->resync_max_sectors;
 	chunksize = le32_to_cpu(sb->chunksize);
 	if (!is_power_of_2(chunksize)) {
 		pr_err("md/llbitmap: %s: chunksize not a power of 2",
@@ -895,8 +902,9 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 
 	llbitmap->barrier_idle = DEFAULT_BARRIER_IDLE;
 	llbitmap->chunksize = chunksize;
-	llbitmap->chunks = DIV_ROUND_UP_SECTOR_T(mddev->resync_max_sectors, chunksize);
+	llbitmap->chunks = DIV_ROUND_UP_SECTOR_T(sync_size, chunksize);
 	llbitmap->chunkshift = ffz(~chunksize);
+	llbitmap->sync_size = sync_size;
 	ret = llbitmap_cache_pages(llbitmap);
 
 out_put_page:
@@ -1026,6 +1034,7 @@ static int llbitmap_resize(struct mddev *mddev, sector_t blocks, int chunksize)
 	llbitmap->chunkshift = ffz(~chunksize);
 	llbitmap->chunksize = chunksize;
 	llbitmap->chunks = chunks;
+	llbitmap->sync_size = blocks;
 
 	return 0;
 }
@@ -1384,7 +1393,7 @@ static void llbitmap_update_sb(void *data)
 	sb->events = cpu_to_le64(mddev->events);
 	sb->state = cpu_to_le32(llbitmap->flags);
 	sb->chunksize = cpu_to_le32(llbitmap->chunksize);
-	sb->sync_size = cpu_to_le64(mddev->resync_max_sectors);
+	sb->sync_size = cpu_to_le64(llbitmap->sync_size);
 	sb->events_cleared = cpu_to_le64(llbitmap->events_cleared);
 	sb->sectors_reserved = cpu_to_le32(mddev->bitmap_info.space);
 	sb->daemon_sleep = cpu_to_le32(mddev->bitmap_info.daemon_sleep);
@@ -1402,6 +1411,7 @@ static int llbitmap_get_stats(void *data, struct md_bitmap_stats *stats)
 	stats->missing_pages = 0;
 	stats->pages = llbitmap->nr_pages;
 	stats->file_pages = llbitmap->nr_pages;
+	stats->sync_size = llbitmap->sync_size;
 
 	stats->behind_writes = atomic_read(&llbitmap->behind_writes);
 	stats->behind_wait = wq_has_sleeper(&llbitmap->behind_wait);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/md-llbitmap: allocate page controls independently
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (2 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track bitmap sync_size explicitly Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: grow the page cache in place for reshape Yu Kuai
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Allocate llbitmap page-control objects one at a time, and free each
object individually to match.

Let llbitmap_read_page() return a zeroed page without reading disk when
the page index is beyond the current bitmap size, so page-control
allocation no longer needs a separate read_existing flag.

This keeps the llbitmap page-control lifetime self-consistent and
prepares the page-cache code for later in-place growth.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 99 +++++++++++++++++++++++++---------------
 1 file changed, 62 insertions(+), 37 deletions(-)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index 547b1317df43..a411041fd122 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -443,13 +443,19 @@ static void llbitmap_write(struct llbitmap *llbitmap, enum llbitmap_state state,
 		llbitmap_set_page_dirty(llbitmap, idx, bit);
 }
 
+static unsigned int llbitmap_used_pages(struct llbitmap *llbitmap,
+					unsigned long chunks)
+{
+	return DIV_ROUND_UP(chunks + BITMAP_DATA_OFFSET, PAGE_SIZE);
+}
+
 static struct page *llbitmap_read_page(struct llbitmap *llbitmap, int idx)
 {
 	struct mddev *mddev = llbitmap->mddev;
 	struct page *page = NULL;
 	struct md_rdev *rdev;
 
-	if (llbitmap->pctl && llbitmap->pctl[idx])
+	if (llbitmap->pctl && idx < llbitmap->nr_pages && llbitmap->pctl[idx])
 		page = llbitmap->pctl[idx]->page;
 	if (page)
 		return page;
@@ -457,6 +463,8 @@ static struct page *llbitmap_read_page(struct llbitmap *llbitmap, int idx)
 	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
 	if (!page)
 		return ERR_PTR(-ENOMEM);
+	if (idx >= llbitmap_used_pages(llbitmap, llbitmap->chunks))
+		return page;
 
 	rdev_for_each(rdev, mddev) {
 		sector_t sector;
@@ -527,61 +535,78 @@ static void llbitmap_free_pages(struct llbitmap *llbitmap)
 	for (i = 0; i < llbitmap->nr_pages; i++) {
 		struct llbitmap_page_ctl *pctl = llbitmap->pctl[i];
 
-		if (!pctl || !pctl->page)
-			break;
-
-		__free_page(pctl->page);
+		if (!pctl)
+			continue;
+		if (pctl->page)
+			__free_page(pctl->page);
 		percpu_ref_exit(&pctl->active);
+		kfree(pctl);
 	}
 
-	kfree(llbitmap->pctl[0]);
 	kfree(llbitmap->pctl);
 	llbitmap->pctl = NULL;
 }
 
-static int llbitmap_cache_pages(struct llbitmap *llbitmap)
+static struct llbitmap_page_ctl *
+llbitmap_alloc_page_ctl(struct llbitmap *llbitmap, int idx)
 {
 	struct llbitmap_page_ctl *pctl;
-	unsigned int nr_pages = DIV_ROUND_UP(llbitmap->chunks +
-					     BITMAP_DATA_OFFSET, PAGE_SIZE);
+	struct page *page;
 	unsigned int size = struct_size(pctl, dirty, BITS_TO_LONGS(
 						llbitmap->blocks_per_page));
-	int i;
-
-	llbitmap->pctl = kmalloc_array(nr_pages, sizeof(void *),
-				       GFP_KERNEL | __GFP_ZERO);
-	if (!llbitmap->pctl)
-		return -ENOMEM;
 
 	size = round_up(size, cache_line_size());
-	pctl = kmalloc_array(nr_pages, size, GFP_KERNEL | __GFP_ZERO);
-	if (!pctl) {
-		kfree(llbitmap->pctl);
-		return -ENOMEM;
+	pctl = kzalloc(size, GFP_KERNEL);
+	if (!pctl)
+		return ERR_PTR(-ENOMEM);
+
+	page = llbitmap_read_page(llbitmap, idx);
+
+	if (IS_ERR(page)) {
+		kfree(pctl);
+		return ERR_CAST(page);
 	}
 
-	llbitmap->nr_pages = nr_pages;
+	if (percpu_ref_init(&pctl->active, active_release,
+			    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) {
+		__free_page(page);
+		kfree(pctl);
+		return ERR_PTR(-ENOMEM);
+	}
 
-	for (i = 0; i < nr_pages; i++, pctl = (void *)pctl + size) {
-		struct page *page = llbitmap_read_page(llbitmap, i);
+	pctl->page = page;
+	pctl->state = page_address(page);
+	init_waitqueue_head(&pctl->wait);
+	return pctl;
+}
 
-		llbitmap->pctl[i] = pctl;
+static unsigned int llbitmap_reserved_pages(struct llbitmap *llbitmap)
+{
+	return DIV_ROUND_UP(llbitmap->mddev->bitmap_info.space << SECTOR_SHIFT,
+			    PAGE_SIZE);
+}
 
-		if (IS_ERR(page)) {
-			llbitmap_free_pages(llbitmap);
-			return PTR_ERR(page);
-		}
+static int llbitmap_alloc_pages(struct llbitmap *llbitmap)
+{
+	unsigned int used_pages = llbitmap_used_pages(llbitmap, llbitmap->chunks);
+	unsigned int nr_pages = max(used_pages, llbitmap_reserved_pages(llbitmap));
+	int i;
+
+	llbitmap->pctl = kcalloc(nr_pages, sizeof(*llbitmap->pctl), GFP_KERNEL);
+	if (!llbitmap->pctl)
+		return -ENOMEM;
 
-		if (percpu_ref_init(&pctl->active, active_release,
-				    PERCPU_REF_ALLOW_REINIT, GFP_KERNEL)) {
-			__free_page(page);
+	llbitmap->nr_pages = nr_pages;
+
+	for (i = 0; i < nr_pages; i++) {
+		llbitmap->pctl[i] = llbitmap_alloc_page_ctl(llbitmap, i);
+		if (IS_ERR(llbitmap->pctl[i])) {
+			int ret = PTR_ERR(llbitmap->pctl[i]);
+
+			llbitmap->pctl[i] = NULL;
 			llbitmap_free_pages(llbitmap);
-			return -ENOMEM;
+			return ret;
 		}
-
-		pctl->page = page;
-		pctl->state = page_address(page);
-		init_waitqueue_head(&pctl->wait);
 	}
 
 	return 0;
@@ -796,7 +821,7 @@ static int llbitmap_init(struct llbitmap *llbitmap)
 	llbitmap->sync_size = blocks;
 	mddev->bitmap_info.daemon_sleep = DEFAULT_DAEMON_SLEEP;
 
-	ret = llbitmap_cache_pages(llbitmap);
+	ret = llbitmap_alloc_pages(llbitmap);
 	if (ret)
 		return ret;
 
@@ -905,7 +930,7 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 	llbitmap->chunks = DIV_ROUND_UP_SECTOR_T(sync_size, chunksize);
 	llbitmap->chunkshift = ffz(~chunksize);
 	llbitmap->sync_size = sync_size;
-	ret = llbitmap_cache_pages(llbitmap);
+	ret = llbitmap_alloc_pages(llbitmap);
 
 out_put_page:
 	__free_page(sb_page);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/md-llbitmap: grow the page cache in place for reshape
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (3 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: allocate page controls independently Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track target reshape geometry fields Yu Kuai
                   ` (13 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Use the page-control helpers to grow llbitmap's cached pages in place
for resize and later reshape preparation, instead of rebuilding the
whole cache.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 139 +++++++++++++++++++++++++++++++++++----
 1 file changed, 127 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index a411041fd122..841b64f0b4e6 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -350,6 +350,19 @@ static char state_machine[BitStateCount][BitmapActionCount] = {
 };
 
 static void __llbitmap_flush(struct mddev *mddev);
+static void llbitmap_flush(struct mddev *mddev);
+static void llbitmap_update_sb(void *data);
+
+static void llbitmap_resize_chunks(struct mddev *mddev, sector_t blocks,
+				   unsigned long *chunksize,
+				   unsigned long *chunks)
+{
+	*chunks = DIV_ROUND_UP_SECTOR_T(blocks, *chunksize);
+	while (*chunks > mddev->bitmap_info.space << SECTOR_SHIFT) {
+		*chunksize = *chunksize << 1;
+		*chunks = DIV_ROUND_UP_SECTOR_T(blocks, *chunksize);
+	}
+}
 
 static enum llbitmap_state llbitmap_read(struct llbitmap *llbitmap, loff_t pos)
 {
@@ -586,6 +599,48 @@ static unsigned int llbitmap_reserved_pages(struct llbitmap *llbitmap)
 			    PAGE_SIZE);
 }
 
+static int llbitmap_expand_pages(struct llbitmap *llbitmap,
+				 unsigned long chunks)
+{
+	struct llbitmap_page_ctl **pctl;
+	unsigned int old_nr_pages = llbitmap->nr_pages;
+	unsigned int nr_pages = llbitmap_used_pages(llbitmap, chunks);
+	int i;
+	int ret;
+
+	if (nr_pages <= old_nr_pages)
+		return 0;
+
+	pctl = kcalloc(nr_pages, sizeof(*pctl), GFP_KERNEL);
+	if (!pctl)
+		return -ENOMEM;
+
+	if (llbitmap->pctl)
+		memcpy(pctl, llbitmap->pctl,
+		       array_size(old_nr_pages, sizeof(*pctl)));
+
+	for (i = old_nr_pages; i < nr_pages; i++) {
+		pctl[i] = llbitmap_alloc_page_ctl(llbitmap, i);
+		if (IS_ERR(pctl[i]))
+			goto err_alloc_ptr;
+	}
+
+	kfree(llbitmap->pctl);
+	llbitmap->pctl = pctl;
+	llbitmap->nr_pages = nr_pages;
+	return 0;
+
+err_alloc_ptr:
+	ret = PTR_ERR(pctl[i]);
+	for (i--; i >= (int)old_nr_pages; i--) {
+		__free_page(pctl[i]->page);
+		percpu_ref_exit(&pctl[i]->active);
+		kfree(pctl[i]);
+	}
+	kfree(pctl);
+	return ret;
+}
+
 static int llbitmap_alloc_pages(struct llbitmap *llbitmap)
 {
 	unsigned int used_pages = llbitmap_used_pages(llbitmap, llbitmap->chunks);
@@ -612,6 +667,34 @@ static int llbitmap_alloc_pages(struct llbitmap *llbitmap)
 	return 0;
 }
 
+static void llbitmap_mark_range(struct llbitmap *llbitmap,
+				unsigned long start,
+				unsigned long end,
+				enum llbitmap_state state)
+{
+	while (start <= end) {
+		llbitmap_write(llbitmap, state, start);
+		start++;
+	}
+}
+
+static int llbitmap_prepare_resize(struct llbitmap *llbitmap,
+				   unsigned long old_chunks,
+				   unsigned long new_chunks,
+				   unsigned long cache_chunks)
+{
+	int ret;
+
+	llbitmap_flush(llbitmap->mddev);
+	ret = llbitmap_expand_pages(llbitmap, cache_chunks);
+	if (ret)
+		return ret;
+	if (new_chunks > old_chunks)
+		llbitmap_mark_range(llbitmap, old_chunks, new_chunks - 1,
+				    BitUnwritten);
+	return 0;
+}
+
 static void llbitmap_init_state(struct llbitmap *llbitmap)
 {
 	enum llbitmap_state state = BitUnwritten;
@@ -899,10 +982,10 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 		goto out_put_page;
 	}
 
-	if (chunksize < DIV_ROUND_UP_SECTOR_T(mddev->resync_max_sectors,
+	if (chunksize < DIV_ROUND_UP_SECTOR_T(sync_size,
 					      mddev->bitmap_info.space << SECTOR_SHIFT)) {
 		pr_err("md/llbitmap: %s: chunksize too small %lu < %llu / %lu",
-		       mdname(mddev), chunksize, mddev->resync_max_sectors,
+		       mdname(mddev), chunksize, sync_size,
 		       mddev->bitmap_info.space);
 		goto out_put_page;
 	}
@@ -1044,24 +1127,56 @@ static int llbitmap_create(struct mddev *mddev)
 static int llbitmap_resize(struct mddev *mddev, sector_t blocks, int chunksize)
 {
 	struct llbitmap *llbitmap = mddev->bitmap;
+	sector_t old_blocks = llbitmap->sync_size;
+	unsigned long old_chunks = llbitmap->chunks;
 	unsigned long chunks;
+	unsigned long cache_chunks;
+	int ret = 0;
+	unsigned long bitmap_chunksize;
+	bool reshape;
 
 	if (chunksize == 0)
 		chunksize = llbitmap->chunksize;
 
-	/* If there is enough space, leave the chunksize unchanged. */
-	chunks = DIV_ROUND_UP_SECTOR_T(blocks, chunksize);
-	while (chunks > mddev->bitmap_info.space << SECTOR_SHIFT) {
-		chunksize = chunksize << 1;
-		chunks = DIV_ROUND_UP_SECTOR_T(blocks, chunksize);
-	}
+	bitmap_chunksize = chunksize;
+	llbitmap_resize_chunks(mddev, blocks, &bitmap_chunksize, &chunks);
 
-	llbitmap->chunkshift = ffz(~chunksize);
-	llbitmap->chunksize = chunksize;
-	llbitmap->chunks = chunks;
-	llbitmap->sync_size = blocks;
+	reshape = mddev->delta_disks || mddev->new_level != mddev->level ||
+		mddev->new_layout != mddev->layout ||
+		mddev->new_chunk_sectors != mddev->chunk_sectors;
+	if (!reshape && bitmap_chunksize != llbitmap->chunksize)
+		return -EOPNOTSUPP;
+	if (blocks == old_blocks && chunks == llbitmap->chunks)
+		return 0;
+
+	mutex_lock(&mddev->bitmap_info.mutex);
 
+	cache_chunks = reshape ? max(old_chunks, chunks) : chunks;
+	ret = llbitmap_prepare_resize(llbitmap, old_chunks, chunks, cache_chunks);
+	if (ret)
+		goto out;
+
+	if (reshape) {
+		llbitmap->reshape_sync_size = blocks;
+		llbitmap->reshape_chunksize = bitmap_chunksize;
+		llbitmap->reshape_chunks = chunks;
+		llbitmap->chunks = max(old_chunks, chunks);
+	} else {
+		if (blocks < old_blocks && chunks < old_chunks)
+			llbitmap_mark_range(llbitmap, chunks, old_chunks - 1,
+					    BitUnwritten);
+		mddev->bitmap_info.chunksize = bitmap_chunksize;
+		llbitmap->chunks = chunks;
+		llbitmap->sync_size = blocks;
+		llbitmap_update_sb(llbitmap);
+	}
+	__llbitmap_flush(mddev);
+	mutex_unlock(&mddev->bitmap_info.mutex);
 	return 0;
+
+out:
+	mutex_unlock(&mddev->bitmap_info.mutex);
+	return ret;
 }
 
 static int llbitmap_load(struct mddev *mddev)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/md-llbitmap: track target reshape geometry fields
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (4 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: grow the page cache in place for reshape Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: finish reshape geometry Yu Kuai
                   ` (12 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Track llbitmap bookkeeping for the target reshape geometry while keeping
a single live bitmap instance.

Add the reshape geometry fields, refresh helper, and update the load and
resize paths to keep the target geometry in sync.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index 841b64f0b4e6..cfd97022b283 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -269,6 +269,9 @@ struct llbitmap {
 	unsigned long chunks;
 	/* total number of sectors tracked by current bitmap geometry */
 	sector_t sync_size;
+	unsigned long reshape_chunksize;
+	unsigned long reshape_chunks;
+	sector_t reshape_sync_size;
 	unsigned long last_end_sync;
 	/*
 	 * time in seconds that dirty bits will be cleared if the page is not
@@ -364,6 +367,38 @@ static void llbitmap_resize_chunks(struct mddev *mddev, sector_t blocks,
 	}
 }
 
+static bool llbitmap_reshaping(struct llbitmap *llbitmap)
+{
+	return llbitmap->mddev->reshape_position != MaxSector;
+}
+
+static sector_t llbitmap_personality_sync_size(struct llbitmap *llbitmap,
+					       bool previous)
+{
+	struct mddev *mddev = llbitmap->mddev;
+
+	if (!llbitmap_reshaping(llbitmap) || !mddev->private || !mddev->pers ||
+	    !mddev->pers->bitmap_sync_size)
+		return llbitmap->sync_size;
+	return mddev->pers->bitmap_sync_size(mddev, previous);
+}
+
+static void llbitmap_refresh_reshape(struct llbitmap *llbitmap)
+{
+	unsigned long old_chunks = DIV_ROUND_UP_SECTOR_T(llbitmap->sync_size,
+						 llbitmap->chunksize);
+	sector_t blocks = llbitmap_personality_sync_size(llbitmap, false);
+	unsigned long chunksize = llbitmap->chunksize;
+	unsigned long chunks = DIV_ROUND_UP_SECTOR_T(blocks, chunksize);
+
+	llbitmap->reshape_sync_size = blocks;
+	llbitmap->reshape_chunksize = chunksize;
+	llbitmap->reshape_chunks = chunks;
+	llbitmap_resize_chunks(llbitmap->mddev, blocks, &llbitmap->reshape_chunksize,
+			       &llbitmap->reshape_chunks);
+	llbitmap->chunks = max(old_chunks, llbitmap->reshape_chunks);
+}
+
 static enum llbitmap_state llbitmap_read(struct llbitmap *llbitmap, loff_t pos)
 {
 	unsigned int idx;
@@ -902,6 +937,7 @@ static int llbitmap_init(struct llbitmap *llbitmap)
 	llbitmap->chunksize = chunksize;
 	llbitmap->chunks = chunks;
 	llbitmap->sync_size = blocks;
+	llbitmap_refresh_reshape(llbitmap);
 	mddev->bitmap_info.daemon_sleep = DEFAULT_DAEMON_SLEEP;
 
 	ret = llbitmap_alloc_pages(llbitmap);
@@ -1013,6 +1049,7 @@ static int llbitmap_read_sb(struct llbitmap *llbitmap)
 	llbitmap->chunks = DIV_ROUND_UP_SECTOR_T(sync_size, chunksize);
 	llbitmap->chunkshift = ffz(~chunksize);
 	llbitmap->sync_size = sync_size;
+	llbitmap_refresh_reshape(llbitmap);
 	ret = llbitmap_alloc_pages(llbitmap);
 
 out_put_page:
@@ -1168,6 +1205,7 @@ static int llbitmap_resize(struct mddev *mddev, sector_t blocks, int chunksize)
 		mddev->bitmap_info.chunksize = bitmap_chunksize;
 		llbitmap->chunks = chunks;
 		llbitmap->sync_size = blocks;
+		llbitmap_refresh_reshape(llbitmap);
 		llbitmap_update_sb(llbitmap);
 	}
 	__llbitmap_flush(mddev);
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/md-llbitmap: finish reshape geometry
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (5 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track target reshape geometry fields Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: refuse reshape while llbitmap still needs sync Yu Kuai
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Commit the staged llbitmap geometry when reshape finishes.

The reshape staging itself is handled through llbitmap_resize(), so only
the finish step remains in this patch.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index cfd97022b283..17a505ec65d6 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -1537,6 +1537,30 @@ static void llbitmap_dirty_bits(struct mddev *mddev, unsigned long s,
 	llbitmap_state_machine(mddev->bitmap, s, e, BitmapActionStartwrite);
 }
 
+static void llbitmap_reshape_finish(struct mddev *mddev)
+{
+	struct llbitmap *llbitmap = mddev->bitmap;
+
+	if (mddev->pers->quiesce)
+		mddev->pers->quiesce(mddev, 1);
+
+	mutex_lock(&mddev->bitmap_info.mutex);
+	llbitmap_flush(mddev);
+
+	llbitmap->chunksize = llbitmap->reshape_chunksize;
+	llbitmap->chunkshift = ffz(~llbitmap->chunksize);
+	llbitmap->chunks = llbitmap->reshape_chunks;
+	llbitmap->sync_size = llbitmap->reshape_sync_size;
+	llbitmap_refresh_reshape(llbitmap);
+	mddev->bitmap_info.chunksize = llbitmap->chunksize;
+	llbitmap_update_sb(llbitmap);
+	__llbitmap_flush(mddev);
+	mutex_unlock(&mddev->bitmap_info.mutex);
+
+	if (mddev->pers->quiesce)
+		mddev->pers->quiesce(mddev, 0);
+}
+
 static void llbitmap_write_sb(struct llbitmap *llbitmap)
 {
 	int nr_blocks = DIV_ROUND_UP(BITMAP_DATA_OFFSET, llbitmap->io_size);
@@ -1792,6 +1816,7 @@ static struct bitmap_operations llbitmap_ops = {
 	.get_stats		= llbitmap_get_stats,
 	.dirty_bits		= llbitmap_dirty_bits,
 	.prepare_range		= llbitmap_prepare_range,
+	.reshape_finish		= llbitmap_reshape_finish,
 	.write_all		= llbitmap_write_all,
 
 	.group			= &md_llbitmap_group,
-- 
2.51.0



* [PATCH] md/md-llbitmap: refuse reshape while llbitmap still needs sync
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (6 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: finish reshape geometry Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: add reshape range mapping helpers Yu Kuai
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Reject reshape when llbitmap still contains NeedSync or Syncing bits.

This keeps reshape from starting until the current llbitmap state has
been reconciled.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index 17a505ec65d6..88169eeda4b5 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -1537,6 +1537,29 @@ static void llbitmap_dirty_bits(struct mddev *mddev, unsigned long s,
 	llbitmap_state_machine(mddev->bitmap, s, e, BitmapActionStartwrite);
 }
 
+static int llbitmap_reshape_can_start(struct mddev *mddev)
+{
+	struct llbitmap *llbitmap = mddev->bitmap;
+	unsigned long chunk;
+	int ret = 0;
+
+	if (!llbitmap)
+		return 0;
+
+	mutex_lock(&mddev->bitmap_info.mutex);
+	for (chunk = 0; chunk < llbitmap->chunks; chunk++) {
+		enum llbitmap_state state = llbitmap_read(llbitmap, chunk);
+
+		if (state == BitNeedSync || state == BitSyncing) {
+			ret = -EBUSY;
+			break;
+		}
+	}
+	mutex_unlock(&mddev->bitmap_info.mutex);
+
+	return ret;
+}
+
 static void llbitmap_reshape_finish(struct mddev *mddev)
 {
 	struct llbitmap *llbitmap = mddev->bitmap;
@@ -1817,6 +1840,7 @@ static struct bitmap_operations llbitmap_ops = {
 	.dirty_bits		= llbitmap_dirty_bits,
 	.prepare_range		= llbitmap_prepare_range,
 	.reshape_finish		= llbitmap_reshape_finish,
+	.reshape_can_start	= llbitmap_reshape_can_start,
 	.write_all		= llbitmap_write_all,
 
 	.group			= &md_llbitmap_group,
-- 
2.51.0



* [PATCH] md/md-llbitmap: add reshape range mapping helpers
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (7 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: refuse reshape while llbitmap still needs sync Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: don't skip reshape ranges from bitmap state Yu Kuai
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Teach llbitmap to choose old versus new geometry during reshape and to
encode exact bitmap ranges for the active geometry.

This is the mapping groundwork for checkpoint remapping.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 96 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 92 insertions(+), 4 deletions(-)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index 88169eeda4b5..fd1dc86afc69 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -9,6 +9,7 @@
 #include <linux/sched.h>
 #include <linux/list.h>
 #include <linux/file.h>
+#include <linux/math64.h>
 #include <linux/seq_file.h>
 #include <trace/events/block.h>
 
@@ -383,6 +384,16 @@ static sector_t llbitmap_personality_sync_size(struct llbitmap *llbitmap,
 	return mddev->pers->bitmap_sync_size(mddev, previous);
 }
 
+static sector_t llbitmap_logical_size(struct llbitmap *llbitmap, bool previous)
+{
+	struct mddev *mddev = llbitmap->mddev;
+
+	if (!llbitmap_reshaping(llbitmap) || !mddev->private || !mddev->pers ||
+	    !mddev->pers->bitmap_array_sectors)
+		return llbitmap_personality_sync_size(llbitmap, previous);
+	return mddev->pers->bitmap_array_sectors(mddev, previous);
+}
+
 static void llbitmap_refresh_reshape(struct llbitmap *llbitmap)
 {
 	unsigned long old_chunks = DIV_ROUND_UP_SECTOR_T(llbitmap->sync_size,
@@ -399,6 +410,52 @@ static void llbitmap_refresh_reshape(struct llbitmap *llbitmap)
 	llbitmap->chunks = max(old_chunks, llbitmap->reshape_chunks);
 }
 
+static void llbitmap_map_layout(struct llbitmap *llbitmap, sector_t *offset,
+				unsigned long *sectors, bool previous)
+{
+	sector_t limit = llbitmap_logical_size(llbitmap, previous);
+	sector_t start = *offset;
+	sector_t end = start + *sectors;
+
+	if (start >= limit) {
+		*sectors = 0;
+		return;
+	}
+	if (end > limit)
+		end = limit;
+
+	*offset = start;
+	*sectors = end - start;
+	if (!*sectors)
+		return;
+
+	if (llbitmap->mddev->pers->bitmap_sector_map)
+		llbitmap->mddev->pers->bitmap_sector_map(llbitmap->mddev, offset,
+							 sectors, previous);
+	else if (!previous && llbitmap->mddev->pers->bitmap_sector)
+		llbitmap->mddev->pers->bitmap_sector(llbitmap->mddev, offset,
+							 sectors);
+}
+
+static void llbitmap_encode_range(struct llbitmap *llbitmap, sector_t *offset,
+				  unsigned long *sectors, bool previous)
+{
+	unsigned long chunksize = previous ? llbitmap->chunksize :
+				      llbitmap->reshape_chunksize;
+	u64 start;
+	u64 end;
+
+	if (!*sectors) {
+		*offset = 0;
+		return;
+	}
+
+	start = div64_u64(*offset, chunksize);
+	end = div64_u64(*offset + *sectors - 1, chunksize);
+	*offset = (sector_t)start << llbitmap->chunkshift;
+	*sectors = (end - start + 1) << llbitmap->chunkshift;
+}
+
 static enum llbitmap_state llbitmap_read(struct llbitmap *llbitmap, loff_t pos)
 {
 	unsigned int idx;
@@ -1248,11 +1305,32 @@ static void llbitmap_destroy(struct mddev *mddev)
 	mutex_unlock(&mddev->bitmap_info.mutex);
 }
 
+static bool llbitmap_map_previous(struct llbitmap *llbitmap, sector_t offset,
+				  unsigned long sectors)
+{
+	struct mddev *mddev = llbitmap->mddev;
+	sector_t boundary = mddev->reshape_position;
+
+	if (!llbitmap_reshaping(llbitmap))
+		return false;
+
+	WARN_ON_ONCE(sectors && offset < boundary && offset + sectors > boundary);
+
+	return mddev->reshape_backwards ? offset < boundary : offset >= boundary;
+}
+
 static void llbitmap_prepare_range(struct mddev *mddev, sector_t *offset,
 				   unsigned long *sectors)
 {
-	if (mddev->pers->bitmap_sector)
-		mddev->pers->bitmap_sector(mddev, offset, sectors);
+	struct llbitmap *llbitmap = mddev->bitmap;
+	bool previous;
+
+	if (!llbitmap)
+		return;
+
+	previous = llbitmap_map_previous(llbitmap, *offset, *sectors);
+	llbitmap_map_layout(llbitmap, offset, sectors, previous);
+	llbitmap_encode_range(llbitmap, offset, sectors, previous);
 }
 
 static void llbitmap_start_write(struct mddev *mddev, sector_t offset,
@@ -1421,7 +1499,11 @@ static bool llbitmap_blocks_synced(struct mddev *mddev, sector_t offset)
 {
 	struct llbitmap *llbitmap = mddev->bitmap;
 	unsigned long p = offset >> llbitmap->chunkshift;
-	enum llbitmap_state c = llbitmap_read(llbitmap, p);
+	enum llbitmap_state c;
+
+	if (p >= llbitmap->chunks)
+		return false;
+	c = llbitmap_read(llbitmap, p);
 
 	return c == BitClean || c == BitDirty;
 }
@@ -1431,7 +1513,11 @@ static sector_t llbitmap_skip_sync_blocks(struct mddev *mddev, sector_t offset)
 	struct llbitmap *llbitmap = mddev->bitmap;
 	unsigned long p = offset >> llbitmap->chunkshift;
 	int blocks = llbitmap->chunksize - (offset & (llbitmap->chunksize - 1));
-	enum llbitmap_state c = llbitmap_read(llbitmap, p);
+	enum llbitmap_state c;
+
+	if (p >= llbitmap->chunks)
+		return 0;
+	c = llbitmap_read(llbitmap, p);
 
 	/* always skip unwritten blocks */
 	if (c == BitUnwritten)
@@ -1461,6 +1547,8 @@ static bool llbitmap_start_sync(struct mddev *mddev, sector_t offset,
 	 * if md_do_sync() loop more times.
 	 */
 	*blocks = llbitmap->chunksize - (offset & (llbitmap->chunksize - 1));
+	if (p >= llbitmap->chunks)
+		return false;
 	return llbitmap_state_machine(llbitmap, p, p,
 				      BitmapActionStartsync) == BitSyncing;
 }
-- 
2.51.0



* [PATCH] md/md-llbitmap: don't skip reshape ranges from bitmap state
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (8 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: add reshape range mapping helpers Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: remap checkpointed bits as reshape progresses Yu Kuai
                   ` (8 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Reshape progress is tracked by array metadata rather than by llbitmap.
Do not let llbitmap_skip_sync_blocks() suppress reshape ranges based on
stale bitmap state before the corresponding checkpoint is persisted.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index fd1dc86afc69..ad1b7a85914b 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -1519,6 +1519,14 @@ static sector_t llbitmap_skip_sync_blocks(struct mddev *mddev, sector_t offset)
 		return 0;
 	c = llbitmap_read(llbitmap, p);
 
+	/*
+	 * Reshape progress is tracked by array metadata rather than llbitmap.
+	 * Skipping reshape ranges from stale bitmap state can lose data after a
+	 * restart before the corresponding bits are checkpointed to disk.
+	 */
+	if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery))
+		return 0;
+
 	/* always skip unwritten blocks */
 	if (c == BitUnwritten)
 		return blocks;
-- 
2.51.0



* [PATCH] md/md-llbitmap: remap checkpointed bits as reshape progresses
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (9 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: don't skip reshape ranges from bitmap state Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: clamp state-machine walks to tracked bits Yu Kuai
                   ` (7 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Merge checkpointed old-geometry llbitmap state forward as
reshape_position advances, and expose the checkpoint remap to the
personalities through a new reshape_mark() bitmap operation.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 172 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 172 insertions(+)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index ad1b7a85914b..b3ff67779557 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -435,6 +435,14 @@ static void llbitmap_map_layout(struct llbitmap *llbitmap, sector_t *offset,
 	else if (!previous && llbitmap->mddev->pers->bitmap_sector)
 		llbitmap->mddev->pers->bitmap_sector(llbitmap->mddev, offset,
 							 sectors);
+
+	limit = llbitmap_personality_sync_size(llbitmap, previous);
+	start = *offset;
+	end = start + *sectors;
+	if (start >= limit)
+		*sectors = 0;
+	else if (end > limit)
+		*sectors = limit - start;
 }
 
 static void llbitmap_encode_range(struct llbitmap *llbitmap, sector_t *offset,
@@ -787,6 +795,33 @@ static int llbitmap_prepare_resize(struct llbitmap *llbitmap,
 	return 0;
 }
 
+static enum llbitmap_state
+llbitmap_rmerge_state(struct llbitmap *llbitmap,
+		      enum llbitmap_state dst,
+		      enum llbitmap_state src)
+{
+	bool level_456 = raid_is_456(llbitmap->mddev);
+
+	if (dst == BitNeedSync || dst == BitSyncing ||
+	    src == BitNeedSync || src == BitSyncing)
+		return BitNeedSync;
+
+	if (dst == BitDirty || src == BitDirty)
+		return BitDirty;
+
+	/*
+	 * Reshape generates valid target parity/data for both already-written
+	 * and not-yet-written regions in the checkpointed range, so a mix of
+	 * clean and unwritten still results in a clean destination bit.
+	 */
+	if (level_456 && ((dst == BitClean && src == BitUnwritten) ||
+			  (src == BitClean && dst == BitUnwritten)))
+		return BitClean;
+	if (dst == BitClean || src == BitClean)
+		return BitClean;
+	return BitUnwritten;
+}
+
 static void llbitmap_init_state(struct llbitmap *llbitmap)
 {
 	enum llbitmap_state state = BitUnwritten;
@@ -1656,6 +1691,120 @@ static int llbitmap_reshape_can_start(struct mddev *mddev)
 	return ret;
 }
 
+struct llbitmap_reshape_range {
+	sector_t offset;
+	unsigned long sectors;
+	sector_t start;
+	sector_t end;
+};
+
+static enum llbitmap_state
+llbitmap_reshape_init_dst(struct llbitmap *llbitmap, unsigned long dst,
+			  const struct llbitmap_reshape_range *new)
+{
+	u64 bit_start = (u64)dst * llbitmap->reshape_chunksize;
+	u64 bit_end = bit_start + llbitmap->reshape_chunksize;
+
+	if (!llbitmap->mddev->reshape_backwards)
+		return bit_start < new->offset ? llbitmap_read(llbitmap, dst) :
+		       BitUnwritten;
+	return bit_end > new->end ? llbitmap_read(llbitmap, dst) : BitUnwritten;
+}
+
+static void llbitmap_reshape_dst_range(struct llbitmap *llbitmap,
+				       unsigned long dst,
+				       const struct llbitmap_reshape_range *new,
+				       struct llbitmap_reshape_range *dst_range)
+{
+	sector_t dst_bit_start = (sector_t)dst * llbitmap->reshape_chunksize;
+
+	dst_range->start = max(dst_bit_start, new->offset);
+	dst_range->end = min(dst_bit_start + llbitmap->reshape_chunksize,
+			     new->end);
+	dst_range->offset = dst_range->start;
+	dst_range->sectors = dst_range->end - dst_range->start;
+}
+
+static void llbitmap_reshape_map_range(struct llbitmap *llbitmap,
+				       sector_t lo, sector_t hi,
+				       bool previous,
+				       struct llbitmap_reshape_range *range)
+{
+	range->offset = lo;
+	range->sectors = hi - lo;
+	llbitmap_map_layout(llbitmap, &range->offset, &range->sectors, previous);
+	range->start = range->offset;
+	range->end = range->offset + range->sectors;
+}
+
+static bool llbitmap_reshape_src_range(const struct llbitmap_reshape_range *old,
+				       const struct llbitmap_reshape_range *new,
+				       const struct llbitmap_reshape_range *dst,
+				       struct llbitmap_reshape_range *src)
+{
+	if (!old->sectors)
+		return false;
+
+	src->start = old->offset +
+		mul_u64_u64_div_u64(dst->start - new->offset,
+				    old->sectors, new->sectors);
+	src->end = old->offset +
+		mul_u64_u64_div_u64_roundup(dst->end - new->offset,
+					    old->sectors, new->sectors);
+	if (src->end > old->end)
+		src->end = old->end;
+	src->offset = src->start;
+	src->sectors = src->end - src->start;
+
+	return src->sectors;
+}
+
+static enum llbitmap_state llbitmap_rmerge_src(struct llbitmap *llbitmap,
+					       enum llbitmap_state state,
+					       const struct llbitmap_reshape_range *src)
+{
+	unsigned long bit = div64_u64(src->start, llbitmap->chunksize);
+	unsigned long end = div64_u64(src->end - 1, llbitmap->chunksize);
+
+	while (bit <= end) {
+		enum llbitmap_state src_state = llbitmap_read(llbitmap, bit);
+
+		state = llbitmap_rmerge_state(llbitmap, state, src_state);
+		bit++;
+	}
+
+	return state;
+}
+
+static void llbitmap_reshape_merge(struct llbitmap *llbitmap,
+				   const struct llbitmap_reshape_range *old,
+				   const struct llbitmap_reshape_range *new)
+{
+	unsigned long dst_start;
+	unsigned long dst_end;
+	unsigned long dst;
+
+	if (!new->sectors)
+		return;
+
+	dst_start = div64_u64(new->offset, llbitmap->reshape_chunksize);
+	dst_end = div64_u64(new->end - 1, llbitmap->reshape_chunksize);
+
+	for (dst = dst_start; dst <= dst_end; dst++) {
+		struct llbitmap_reshape_range dst_range;
+		struct llbitmap_reshape_range src;
+		enum llbitmap_state state;
+
+		llbitmap_reshape_dst_range(llbitmap, dst, new, &dst_range);
+		state = llbitmap_reshape_init_dst(llbitmap, dst, new);
+		if (llbitmap_reshape_src_range(old, new, &dst_range, &src))
+			state = llbitmap_rmerge_src(llbitmap, state, &src);
+		else
+			state = llbitmap_rmerge_state(llbitmap, state, BitUnwritten);
+		llbitmap_write(llbitmap, state, dst);
+	}
+}
+
 static void llbitmap_reshape_finish(struct mddev *mddev)
 {
 	struct llbitmap *llbitmap = mddev->bitmap;
@@ -1680,6 +1829,28 @@ static void llbitmap_reshape_finish(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 0);
 }
 
+static void llbitmap_reshape_mark(struct mddev *mddev, sector_t old_pos,
+				  sector_t new_pos)
+{
+	struct llbitmap *llbitmap = mddev->bitmap;
+	sector_t lo;
+	sector_t hi;
+	struct llbitmap_reshape_range old;
+	struct llbitmap_reshape_range new;
+
+	if (!llbitmap || old_pos == new_pos)
+		return;
+
+	lo = min(old_pos, new_pos);
+	hi = max(old_pos, new_pos);
+	if (!hi)
+		return;
+
+	llbitmap_reshape_map_range(llbitmap, lo, hi, true, &old);
+	llbitmap_reshape_map_range(llbitmap, lo, hi, false, &new);
+	llbitmap_reshape_merge(llbitmap, &old, &new);
+}
+
 static void llbitmap_write_sb(struct llbitmap *llbitmap)
 {
 	int nr_blocks = DIV_ROUND_UP(BITMAP_DATA_OFFSET, llbitmap->io_size);
@@ -1937,6 +2108,7 @@ static struct bitmap_operations llbitmap_ops = {
 	.prepare_range		= llbitmap_prepare_range,
 	.reshape_finish		= llbitmap_reshape_finish,
 	.reshape_can_start	= llbitmap_reshape_can_start,
+	.reshape_mark		= llbitmap_reshape_mark,
 	.write_all		= llbitmap_write_all,
 
 	.group			= &md_llbitmap_group,
-- 
2.51.0



* [PATCH] md/md-llbitmap: clamp state-machine walks to tracked bits
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (10 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: remap checkpointed bits as reshape progresses Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid10: reject llbitmap chunk shrink during reshape Yu Kuai
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

llbitmap_state_machine() can be called with an end bit beyond
llbitmap->chunks. In particular, llbitmap_cond_end_sync() passes
sector >> chunkshift, and sector can land exactly on the tracked
boundary, yielding a bit index one past the last tracked chunk.

Clamp the state-machine range to llbitmap->chunks so it cannot walk
past the tracked bitmap.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/md-llbitmap.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/md/md-llbitmap.c b/drivers/md/md-llbitmap.c
index b3ff67779557..6f81ccf3e884 100644
--- a/drivers/md/md-llbitmap.c
+++ b/drivers/md/md-llbitmap.c
@@ -853,7 +853,10 @@ static enum llbitmap_state llbitmap_state_machine(struct llbitmap *llbitmap,
 		llbitmap_init_state(llbitmap);
 		return BitNone;
 	}
-
+	if (start >= llbitmap->chunks)
+		return BitNone;
+	if (end >= llbitmap->chunks)
+		end = llbitmap->chunks - 1;
 	while (start <= end) {
 		enum llbitmap_state c = llbitmap_read(llbitmap, start);
 
-- 
2.51.0



* [PATCH] md/raid10: reject llbitmap chunk shrink during reshape
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (11 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/md-llbitmap: clamp state-machine walks to tracked bits Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid10: wire llbitmap reshape lifecycle Yu Kuai
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

llbitmap reshape keeps one live bitmap and only supports growing the
tracked chunk geometry. Reject RAID10 reshape attempts that would shrink
the llbitmap chunk size.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid10.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 4901ebe45c87..ad19257c79fb 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4260,6 +4260,10 @@ static int raid10_check_reshape(struct mddev *mddev)
 
 	if (conf->geo.far_copies != 1 && !conf->geo.far_offset)
 		return -EINVAL;
+	if (mddev->bitmap_id == ID_LLBITMAP &&
+	    mddev->new_chunk_sectors &&
+	    mddev->new_chunk_sectors < mddev->chunk_sectors)
+		return -EOPNOTSUPP;
 
 	if (setup_geo(&geo, mddev, geo_start) != conf->copies)
 		/* mustn't change number of copies */
-- 
2.51.0



* [PATCH] md/raid10: wire llbitmap reshape lifecycle
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (12 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid10: reject llbitmap chunk shrink during reshape Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-30  2:37   ` kernel test robot
  2026-04-19  3:09 ` [PATCH] md/raid10: split reshape bios before bitmap accounting Yu Kuai
                   ` (4 subsequent siblings)
  18 siblings, 1 reply; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Prepare llbitmap before RAID10 starts growing, checkpoint the bitmap
before advancing reshape_position, finish the llbitmap geometry update
when reshape completes, and export the old and new tracked sizes.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid10.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ad19257c79fb..13e31d01ed0f 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4370,6 +4370,12 @@ static int raid10_start_reshape(struct mddev *mddev)
 
 	if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
 		return -EBUSY;
+	if (md_bitmap_enabled(mddev, false) &&
+	    mddev->bitmap_ops->reshape_can_start) {
+		ret = mddev->bitmap_ops->reshape_can_start(mddev);
+		if (ret)
+			return ret;
+	}
 
 	if (setup_geo(&new, mddev, geo_start) != conf->copies)
 		return -EINVAL;
@@ -4683,6 +4689,13 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
 	    time_after(jiffies, conf->reshape_checkpoint + 10*HZ)) {
 		/* Need to update reshape_position in metadata */
 		wait_barrier(conf, false);
+		if (md_bitmap_enabled(mddev, false) &&
+		    mddev->bitmap_ops->reshape_mark &&
+		    conf->reshape_safe != conf->reshape_progress) {
+			mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
+						       conf->reshape_progress);
+			mddev->bitmap_ops->unplug(mddev, true);
+		}
 		mddev->reshape_position = conf->reshape_progress;
 		if (mddev->reshape_backwards)
 			mddev->curr_resync_completed = raid10_size(mddev, 0, 0)
@@ -4881,9 +4894,19 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio)
 
 static void end_reshape(struct r10conf *conf)
 {
+	struct mddev *mddev = conf->mddev;
+
 	if (test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery))
 		return;
 
+	if (md_bitmap_enabled(mddev, false) &&
+	    mddev->bitmap_ops->reshape_mark &&
+	    conf->reshape_safe != conf->reshape_progress) {
+		mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
+					       conf->reshape_progress);
+		mddev->bitmap_ops->unplug(mddev, true);
+	}
+
 	spin_lock_irq(&conf->device_lock);
 	conf->prev = conf->geo;
 	md_finish_reshape(conf->mddev);
@@ -5015,10 +5038,15 @@ static void end_reshape_request(struct r10bio *r10_bio)
 static void raid10_finish_reshape(struct mddev *mddev)
 {
 	struct r10conf *conf = mddev->private;
+	bool llbitmap = mddev->bitmap_id == ID_LLBITMAP &&
+		md_bitmap_enabled(mddev, false);
 
 	if (test_bit(MD_RECOVERY_INTR, &mddev->recovery))
 		return;
 
+	if (llbitmap && mddev->bitmap_ops->reshape_finish)
+		mddev->bitmap_ops->reshape_finish(mddev);
+
 	if (mddev->delta_disks > 0) {
 		if (mddev->resync_offset > mddev->resync_max_sectors) {
 			mddev->resync_offset = mddev->resync_max_sectors;
@@ -5045,6 +5073,15 @@ static void raid10_finish_reshape(struct mddev *mddev)
 	mddev->reshape_backwards = 0;
 }
 
+static sector_t raid10_bitmap_sync_size(struct mddev *mddev, bool previous)
+{
+	struct r10conf *conf = mddev->private;
+
+	if (previous)
+		return raid10_size(mddev, 0, 0);
+	return raid10_size(mddev, 0, conf->geo.raid_disks);
+}
+
 static struct md_personality raid10_personality =
 {
 	.head = {
@@ -5071,6 +5108,8 @@ static struct md_personality raid10_personality =
 	.start_reshape	= raid10_start_reshape,
 	.finish_reshape	= raid10_finish_reshape,
 	.update_reshape_pos = raid10_update_reshape_pos,
+	.bitmap_sync_size = raid10_bitmap_sync_size,
+	.bitmap_array_sectors = raid10_bitmap_sync_size,
 };
 
 static int __init raid10_init(void)
-- 
2.51.0



* [PATCH] md/raid10: split reshape bios before bitmap accounting
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (13 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid10: wire llbitmap reshape lifecycle Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid5: add exact old and new llbitmap mapping helpers Yu Kuai
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Use the shared mddev_bio_split_at_reshape_offset() helper so that RAID10
never passes a bio spanning reshape_position to llbitmap accounting: each
submitted bio maps entirely to either the old or the new geometry.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid10.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 13e31d01ed0f..b54cb1828a48 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1891,6 +1891,12 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio)
 		sectors = chunk_sects -
 			(bio->bi_iter.bi_sector &
 			 (chunk_sects - 1));
+
+	bio = mddev_bio_split_at_reshape_offset(mddev, bio, &sectors,
+						&conf->bio_split);
+	if (!bio)
+		return true;
+
 	__make_request(mddev, bio, sectors);
 
 	/* In case raid10d snuck in to freeze_array */
@@ -4264,7 +4270,6 @@ static int raid10_check_reshape(struct mddev *mddev)
 	    mddev->new_chunk_sectors &&
 	    mddev->new_chunk_sectors < mddev->chunk_sectors)
 		return -EOPNOTSUPP;
-
 	if (setup_geo(&geo, mddev, geo_start) != conf->copies)
 		/* mustn't change number of copies */
 		return -EINVAL;
-- 
2.51.0



* [PATCH] md/raid5: add exact old and new llbitmap mapping helpers
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (14 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid10: split reshape bios before bitmap accounting Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-05-01 18:51   ` kernel test robot
  2026-04-19  3:09 ` [PATCH] md/raid5: reject llbitmap chunk shrink during reshape Yu Kuai
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Teach RAID5 to export exact old and new llbitmap mappings and the
corresponding sync and array sizes for reshape-aware bitmap users.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid5.c | 70 ++++++++++++++++++++++++++++++++++------------
 1 file changed, 52 insertions(+), 18 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1f8360d4cdb7..0c58c175bad9 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5898,25 +5898,43 @@ static enum reshape_loc get_reshape_loc(struct mddev *mddev,
 	return LOC_BEHIND_RESHAPE;
 }
 
-static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
-				unsigned long *sectors)
+static void raid5_bitmap_sector_map(struct mddev *mddev, sector_t *offset,
+				    unsigned long *sectors,
+				    bool previous)
 {
 	struct r5conf *conf = mddev->private;
 	sector_t start = *offset;
 	sector_t end = start + *sectors;
-	sector_t prev_start = start;
-	sector_t prev_end = end;
 	int sectors_per_chunk;
-	enum reshape_loc loc;
 	int dd_idx;
 
-	sectors_per_chunk = conf->chunk_sectors *
-		(conf->raid_disks - conf->max_degraded);
+	if (previous)
+		sectors_per_chunk = conf->prev_chunk_sectors *
+			(conf->previous_raid_disks - conf->max_degraded);
+	else
+		sectors_per_chunk = conf->chunk_sectors *
+			(conf->raid_disks - conf->max_degraded);
 	start = round_down(start, sectors_per_chunk);
 	end = round_up(end, sectors_per_chunk);
 
-	start = raid5_compute_sector(conf, start, 0, &dd_idx, NULL);
-	end = raid5_compute_sector(conf, end, 0, &dd_idx, NULL);
+	start = raid5_compute_sector(conf, start, previous, &dd_idx, NULL);
+	end = raid5_compute_sector(conf, end, previous, &dd_idx, NULL);
+	*offset = start;
+	*sectors = end - start;
+}
+
+static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
+				unsigned long *sectors)
+{
+	struct r5conf *conf = mddev->private;
+	sector_t start = *offset;
+	sector_t end = start + *sectors;
+	sector_t prev_start = start;
+	unsigned long prev_sectors = end - start;
+	enum reshape_loc loc;
+
+	raid5_bitmap_sector_map(mddev, &start, sectors, false);
+	end = start + *sectors;
 
 	/*
 	 * For LOC_INSIDE_RESHAPE, this IO will wait for reshape to make
@@ -5925,17 +5943,10 @@ static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
 	loc = get_reshape_loc(mddev, conf, prev_start);
 	if (likely(loc != LOC_AHEAD_OF_RESHAPE)) {
 		*offset = start;
-		*sectors = end - start;
 		return;
 	}
 
-	sectors_per_chunk = conf->prev_chunk_sectors *
-		(conf->previous_raid_disks - conf->max_degraded);
-	prev_start = round_down(prev_start, sectors_per_chunk);
-	prev_end = round_down(prev_end, sectors_per_chunk);
-
-	prev_start = raid5_compute_sector(conf, prev_start, 1, &dd_idx, NULL);
-	prev_end = raid5_compute_sector(conf, prev_end, 1, &dd_idx, NULL);
+	raid5_bitmap_sector_map(mddev, &prev_start, &prev_sectors, true);
 
 	/*
 	 * for LOC_AHEAD_OF_RESHAPE, reshape can make progress before this IO
@@ -5943,7 +5954,7 @@ static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
 	 * we set bits for both.
 	 */
 	*offset = min(start, prev_start);
-	*sectors = max(end, prev_end) - *offset;
+	*sectors = max(end, prev_start + prev_sectors) - *offset;
 }
 
 static enum stripe_result make_stripe_request(struct mddev *mddev,
@@ -9002,6 +9013,20 @@ static void raid5_prepare_suspend(struct mddev *mddev)
 	wake_up(&conf->wait_for_reshape);
 }
 
+static sector_t raid5_bitmap_sync_size(struct mddev *mddev, bool previous)
+{
+	return mddev->dev_sectors;
+}
+
+static sector_t raid5_bitmap_array_sectors(struct mddev *mddev, bool previous)
+{
+	struct r5conf *conf = mddev->private;
+
+	if (previous)
+		return raid5_size(mddev, 0, 0);
+	return raid5_size(mddev, mddev->dev_sectors, conf->raid_disks);
+}
+
 static struct md_personality raid6_personality =
 {
 	.head = {
@@ -9031,6 +9056,9 @@ static struct md_personality raid6_personality =
 	.change_consistency_policy = raid5_change_consistency_policy,
 	.prepare_suspend = raid5_prepare_suspend,
 	.bitmap_sector	= raid5_bitmap_sector,
+	.bitmap_sector_map = raid5_bitmap_sector_map,
+	.bitmap_sync_size = raid5_bitmap_sync_size,
+	.bitmap_array_sectors = raid5_bitmap_array_sectors,
 };
 static struct md_personality raid5_personality =
 {
@@ -9061,6 +9089,9 @@ static struct md_personality raid5_personality =
 	.change_consistency_policy = raid5_change_consistency_policy,
 	.prepare_suspend = raid5_prepare_suspend,
 	.bitmap_sector	= raid5_bitmap_sector,
+	.bitmap_sector_map = raid5_bitmap_sector_map,
+	.bitmap_sync_size = raid5_bitmap_sync_size,
+	.bitmap_array_sectors = raid5_bitmap_array_sectors,
 };
 
 static struct md_personality raid4_personality =
@@ -9092,6 +9123,9 @@ static struct md_personality raid4_personality =
 	.change_consistency_policy = raid5_change_consistency_policy,
 	.prepare_suspend = raid5_prepare_suspend,
 	.bitmap_sector	= raid5_bitmap_sector,
+	.bitmap_sector_map = raid5_bitmap_sector_map,
+	.bitmap_sync_size = raid5_bitmap_sync_size,
+	.bitmap_array_sectors = raid5_bitmap_array_sectors,
 };
 
 static int __init raid5_init(void)
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/raid5: reject llbitmap chunk shrink during reshape
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (15 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid5: add exact old and new llbitmap mapping helpers Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid5: wire llbitmap reshape lifecycle Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

llbitmap reshape keeps one live bitmap and only supports growing the
tracked chunk geometry. Reject RAID5 reshape attempts that would shrink
the llbitmap chunk size.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid5.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0c58c175bad9..178283aa91a4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -8459,6 +8459,9 @@ static int check_reshape(struct mddev *mddev)
 	if (!check_stripe_cache(mddev))
 		return -ENOSPC;
 
+	if (mddev->bitmap_id == ID_LLBITMAP &&
+	    mddev->new_chunk_sectors < mddev->chunk_sectors)
+		return -EOPNOTSUPP;
 	if (mddev->new_chunk_sectors > mddev->chunk_sectors ||
 	    mddev->delta_disks > 0)
 		if (resize_chunks(conf,
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/raid5: wire llbitmap reshape lifecycle
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (16 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid5: reject llbitmap chunk shrink during reshape Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
  18 siblings, 0 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Prepare llbitmap before RAID5 reshape starts, checkpoint the bitmap
before advancing reshape_position, and finish the llbitmap geometry
update when reshape completes.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid5.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 178283aa91a4..476f6fc5a97c 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6378,6 +6378,13 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
 			   || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
 		if (atomic_read(&conf->reshape_stripes) != 0)
 			return 0;
+		if (md_bitmap_enabled(mddev, false) &&
+		    mddev->bitmap_ops->reshape_mark &&
+		    conf->reshape_safe != conf->reshape_progress) {
+			mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
+						       conf->reshape_progress);
+			mddev->bitmap_ops->unplug(mddev, true);
+		}
 		mddev->reshape_position = conf->reshape_progress;
 		mddev->curr_resync_completed = sector_nr;
 		if (!mddev->reshape_backwards)
@@ -6487,6 +6494,13 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, int *sk
 			   || test_bit(MD_RECOVERY_INTR, &mddev->recovery));
 		if (atomic_read(&conf->reshape_stripes) != 0)
 			goto ret;
+		if (md_bitmap_enabled(mddev, false) &&
+		    mddev->bitmap_ops->reshape_mark &&
+		    conf->reshape_safe != conf->reshape_progress) {
+			mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
+						       conf->reshape_progress);
+			mddev->bitmap_ops->unplug(mddev, true);
+		}
 		mddev->reshape_position = conf->reshape_progress;
 		mddev->curr_resync_completed = sector_nr;
 		if (!mddev->reshape_backwards)
@@ -8524,6 +8538,12 @@ static int raid5_start_reshape(struct mddev *mddev)
 			mdname(mddev));
 		return -EINVAL;
 	}
+	if (md_bitmap_enabled(mddev, false) &&
+	    mddev->bitmap_id == ID_LLBITMAP) {
+		i = mddev->bitmap_ops->resize(mddev, mddev->dev_sectors, 0);
+		if (i)
+			return i;
+	}
 
 	atomic_set(&conf->reshape_stripes, 0);
 	spin_lock_irq(&conf->device_lock);
@@ -8608,10 +8628,19 @@ static int raid5_start_reshape(struct mddev *mddev)
  */
 static void end_reshape(struct r5conf *conf)
 {
+	struct mddev *mddev = conf->mddev;
 
 	if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) {
 		struct md_rdev *rdev;
 
+		if (md_bitmap_enabled(mddev, false) &&
+		    mddev->bitmap_ops->reshape_mark &&
+		    conf->reshape_safe != conf->reshape_progress) {
+			mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
+						       conf->reshape_progress);
+			mddev->bitmap_ops->unplug(mddev, true);
+		}
+
 		spin_lock_irq(&conf->device_lock);
 		conf->previous_raid_disks = conf->raid_disks;
 		md_finish_reshape(conf->mddev);
@@ -8638,8 +8667,16 @@ static void raid5_finish_reshape(struct mddev *mddev)
 {
 	struct r5conf *conf = mddev->private;
 	struct md_rdev *rdev;
+	bool llbitmap = mddev->bitmap_id == ID_LLBITMAP &&
+		md_bitmap_enabled(mddev, false);
 
 	if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
+		if (llbitmap && mddev->bitmap_ops->reshape_finish)
+			mddev->bitmap_ops->reshape_finish(mddev);
+		if (llbitmap) {
+			mddev->resync_offset = 0;
+			mddev->resync_max_sectors = mddev->dev_sectors;
+		}
 
 		if (mddev->delta_disks <= 0) {
 			int d;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH] md/raid5: split reshape bios before bitmap accounting
  2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
                   ` (17 preceding siblings ...)
  2026-04-19  3:09 ` [PATCH] md/raid5: wire llbitmap reshape lifecycle Yu Kuai
@ 2026-04-19  3:09 ` Yu Kuai
  2026-04-30  0:59   ` kernel test robot
                     ` (2 more replies)
  18 siblings, 3 replies; 25+ messages in thread
From: Yu Kuai @ 2026-04-19  3:09 UTC (permalink / raw)
  To: linux-raid; +Cc: linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Use the shared mddev_bio_split_at_reshape_offset() helper so that, during
reshape, RAID5 submits to llbitmap only bios that lie entirely on one side
of reshape_position.

Signed-off-by: Yu Kuai <yukuai@fnnas.com>
---
 drivers/md/raid5.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 476f6fc5a97c..7fa74c60c7d8 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6134,6 +6134,14 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 		return true;
 	}
 
+	bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
+					       &conf->bio_split);
+	if (!bi) {
+		if (rw == WRITE)
+			md_write_end(mddev);
+		return true;
+	}
+
 	logical_sector = bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTORS(conf)-1);
 	bi->bi_next = NULL;
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH] md/raid5: split reshape bios before bitmap accounting
  2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
@ 2026-04-30  0:59   ` kernel test robot
  2026-04-30  4:07   ` kernel test robot
  2026-04-30 19:48   ` kernel test robot
  2 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2026-04-30  0:59 UTC (permalink / raw)
  To: Yu Kuai, linux-raid
  Cc: llvm, oe-kbuild-all, linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Hi Yu,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on song-md/md-next v7.1-rc1 next-20260429]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-raid5-split-reshape-bios-before-bitmap-accounting/20260425-083941
base:   linus/master
patch link:    https://lore.kernel.org/r/20260419030942.824195-20-yukuai%40fnnas.com
patch subject: [PATCH] md/raid5: split reshape bios before bitmap accounting
config: um-randconfig-001-20260430 (https://download.01.org/0day-ci/archive/20260430/202604300803.nq5tYBQB-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 5bac06718f502014fade905512f1d26d578a18f3)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260430/202604300803.nq5tYBQB-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202604300803.nq5tYBQB-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/md/raid5.c:4221:7: warning: variable 'qread' set but not used [-Wunused-but-set-variable]
    4221 |                 int qread =0;
         |                     ^
>> drivers/md/raid5.c:6126:7: error: call to undeclared function 'mddev_bio_split_at_reshape_offset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    6126 |         bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
         |              ^
>> drivers/md/raid5.c:6126:5: error: incompatible integer to pointer conversion assigning to 'struct bio *' from 'int' [-Wint-conversion]
    6126 |         bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
         |            ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    6127 |                                                &conf->bio_split);
         |                                                ~~~~~~~~~~~~~~~~~
   1 warning and 2 errors generated.


vim +/mddev_bio_split_at_reshape_offset +6126 drivers/md/raid5.c

  6083	
  6084	static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
  6085	{
  6086		DEFINE_WAIT_FUNC(wait, woken_wake_function);
  6087		struct r5conf *conf = mddev->private;
  6088		const int rw = bio_data_dir(bi);
  6089		struct stripe_request_ctx *ctx;
  6090		sector_t logical_sector;
  6091		enum stripe_result res;
  6092		int s, stripe_cnt;
  6093		bool on_wq;
  6094	
  6095		if (unlikely(bi->bi_opf & REQ_PREFLUSH)) {
  6096			int ret = log_handle_flush_request(conf, bi);
  6097	
  6098			if (ret == 0)
  6099				return true;
  6100			if (ret == -ENODEV) {
  6101				if (md_flush_request(mddev, bi))
  6102					return true;
  6103			}
  6104			/* ret == -EAGAIN, fallback */
  6105		}
  6106	
  6107		md_write_start(mddev, bi);
  6108		/*
  6109		 * If array is degraded, better not do chunk aligned read because
  6110		 * later we might have to read it again in order to reconstruct
  6111		 * data on failed drives.
  6112		 */
  6113		if (rw == READ && mddev->degraded == 0 &&
  6114		    mddev->reshape_position == MaxSector) {
  6115			bi = chunk_aligned_read(mddev, bi);
  6116			if (!bi)
  6117				return true;
  6118		}
  6119	
  6120		if (unlikely(bio_op(bi) == REQ_OP_DISCARD)) {
  6121			make_discard_request(mddev, bi);
  6122			md_write_end(mddev);
  6123			return true;
  6124		}
  6125	
> 6126		bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
  6127						       &conf->bio_split);
  6128		if (!bi) {
  6129			if (rw == WRITE)
  6130				md_write_end(mddev);
  6131			return true;
  6132		}
  6133	
  6134		logical_sector = bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTORS(conf)-1);
  6135		bi->bi_next = NULL;
  6136	
  6137		ctx = mempool_alloc(conf->ctx_pool, GFP_NOIO);
  6138		memset(ctx, 0, conf->ctx_size);
  6139		ctx->first_sector = logical_sector;
  6140		ctx->last_sector = bio_end_sector(bi);
  6141		/*
  6142		 * if r5l_handle_flush_request() didn't clear REQ_PREFLUSH,
  6143		 * we need to flush journal device
  6144		 */
  6145		if (unlikely(bi->bi_opf & REQ_PREFLUSH))
  6146			ctx->do_flush = true;
  6147	
  6148		stripe_cnt = DIV_ROUND_UP_SECTOR_T(ctx->last_sector - logical_sector,
  6149						   RAID5_STRIPE_SECTORS(conf));
  6150		bitmap_set(ctx->sectors_to_do, 0, stripe_cnt);
  6151	
  6152		pr_debug("raid456: %s, logical %llu to %llu\n", __func__,
  6153			 bi->bi_iter.bi_sector, ctx->last_sector);
  6154	
  6155		/* Bail out if conflicts with reshape and REQ_NOWAIT is set */
  6156		if ((bi->bi_opf & REQ_NOWAIT) &&
  6157		    get_reshape_loc(mddev, conf, logical_sector) == LOC_INSIDE_RESHAPE) {
  6158			bio_wouldblock_error(bi);
  6159			if (rw == WRITE)
  6160				md_write_end(mddev);
  6161			mempool_free(ctx, conf->ctx_pool);
  6162			return true;
  6163		}
  6164		md_account_bio(mddev, &bi);
  6165	
  6166		/*
  6167		 * Lets start with the stripe with the lowest chunk offset in the first
  6168		 * chunk. That has the best chances of creating IOs adjacent to
  6169		 * previous IOs in case of sequential IO and thus creates the most
  6170		 * sequential IO pattern. We don't bother with the optimization when
  6171		 * reshaping as the performance benefit is not worth the complexity.
  6172		 */
  6173		if (likely(conf->reshape_progress == MaxSector)) {
  6174			logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
  6175			on_wq = false;
  6176		} else {
  6177			add_wait_queue(&conf->wait_for_reshape, &wait);
  6178			on_wq = true;
  6179		}
  6180		s = (logical_sector - ctx->first_sector) >> RAID5_STRIPE_SHIFT(conf);
  6181	
  6182		while (1) {
  6183			res = make_stripe_request(mddev, conf, ctx, logical_sector,
  6184						  bi);
  6185			if (res == STRIPE_FAIL || res == STRIPE_WAIT_RESHAPE)
  6186				break;
  6187	
  6188			if (res == STRIPE_RETRY)
  6189				continue;
  6190	
  6191			if (res == STRIPE_SCHEDULE_AND_RETRY) {
  6192				WARN_ON_ONCE(!on_wq);
  6193				/*
  6194				 * Must release the reference to batch_last before
  6195				 * scheduling and waiting for work to be done,
  6196				 * otherwise the batch_last stripe head could prevent
  6197				 * raid5_activate_delayed() from making progress
  6198				 * and thus deadlocking.
  6199				 */
  6200				if (ctx->batch_last) {
  6201					raid5_release_stripe(ctx->batch_last);
  6202					ctx->batch_last = NULL;
  6203				}
  6204	
  6205				wait_woken(&wait, TASK_UNINTERRUPTIBLE,
  6206					   MAX_SCHEDULE_TIMEOUT);
  6207				continue;
  6208			}
  6209	
  6210			s = find_next_bit_wrap(ctx->sectors_to_do, stripe_cnt, s);
  6211			if (s == stripe_cnt)
  6212				break;
  6213	
  6214			logical_sector = ctx->first_sector +
  6215				(s << RAID5_STRIPE_SHIFT(conf));
  6216		}
  6217		if (unlikely(on_wq))
  6218			remove_wait_queue(&conf->wait_for_reshape, &wait);
  6219	
  6220		if (ctx->batch_last)
  6221			raid5_release_stripe(ctx->batch_last);
  6222	
  6223		if (rw == WRITE)
  6224			md_write_end(mddev);
  6225	
  6226		mempool_free(ctx, conf->ctx_pool);
  6227		if (res == STRIPE_WAIT_RESHAPE) {
  6228			md_free_cloned_bio(bi);
  6229			return false;
  6230		}
  6231	
  6232		bio_endio(bi);
  6233		return true;
  6234	}
  6235	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH] md/raid10: wire llbitmap reshape lifecycle
  2026-04-19  3:09 ` [PATCH] md/raid10: wire llbitmap reshape lifecycle Yu Kuai
@ 2026-04-30  2:37   ` kernel test robot
  0 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2026-04-30  2:37 UTC (permalink / raw)
  To: Yu Kuai, linux-raid
  Cc: llvm, oe-kbuild-all, linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Hi Yu,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on song-md/md-next v7.1-rc1 next-20260429]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-raid10-wire-llbitmap-reshape-lifecycle/20260423-170302
base:   linus/master
patch link:    https://lore.kernel.org/r/20260419030942.824195-15-yukuai%40fnnas.com
patch subject: [PATCH] md/raid10: wire llbitmap reshape lifecycle
config: um-randconfig-001-20260430 (https://download.01.org/0day-ci/archive/20260430/202604301028.uutGNSgD-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 5bac06718f502014fade905512f1d26d578a18f3)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260430/202604301028.uutGNSgD-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202604301028.uutGNSgD-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/md/raid10.c:4370:25: error: no member named 'reshape_can_start' in 'struct bitmap_operations'
    4370 |             mddev->bitmap_ops->reshape_can_start) {
         |             ~~~~~~~~~~~~~~~~~  ^
   drivers/md/raid10.c:4371:28: error: no member named 'reshape_can_start' in 'struct bitmap_operations'
    4371 |                 ret = mddev->bitmap_ops->reshape_can_start(mddev);
         |                       ~~~~~~~~~~~~~~~~~  ^
>> drivers/md/raid10.c:4689:26: error: no member named 'reshape_mark' in 'struct bitmap_operations'
    4689 |                     mddev->bitmap_ops->reshape_mark &&
         |                     ~~~~~~~~~~~~~~~~~  ^
   drivers/md/raid10.c:4691:23: error: no member named 'reshape_mark' in 'struct bitmap_operations'
    4691 |                         mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
         |                         ~~~~~~~~~~~~~~~~~  ^
   drivers/md/raid10.c:4899:25: error: no member named 'reshape_mark' in 'struct bitmap_operations'
    4899 |             mddev->bitmap_ops->reshape_mark &&
         |             ~~~~~~~~~~~~~~~~~  ^
   drivers/md/raid10.c:4901:22: error: no member named 'reshape_mark' in 'struct bitmap_operations'
    4901 |                 mddev->bitmap_ops->reshape_mark(mddev, conf->reshape_safe,
         |                 ~~~~~~~~~~~~~~~~~  ^
>> drivers/md/raid10.c:5043:37: error: no member named 'reshape_finish' in 'struct bitmap_operations'
    5043 |         if (llbitmap && mddev->bitmap_ops->reshape_finish)
         |                         ~~~~~~~~~~~~~~~~~  ^
   drivers/md/raid10.c:5044:22: error: no member named 'reshape_finish' in 'struct bitmap_operations'
    5044 |                 mddev->bitmap_ops->reshape_finish(mddev);
         |                 ~~~~~~~~~~~~~~~~~  ^
>> drivers/md/raid10.c:5107:3: error: field designator 'bitmap_sync_size' does not refer to any field in type 'struct md_personality'
    5107 |         .bitmap_sync_size = raid10_bitmap_sync_size,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/md/raid10.c:5108:3: error: field designator 'bitmap_array_sectors' does not refer to any field in type 'struct md_personality'
    5108 |         .bitmap_array_sectors = raid10_bitmap_sync_size,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   10 errors generated.


vim +4370 drivers/md/raid10.c

  4345	
  4346	static int raid10_start_reshape(struct mddev *mddev)
  4347	{
  4348		/* A 'reshape' has been requested. This commits
  4349		 * the various 'new' fields and sets MD_RECOVER_RESHAPE
  4350		 * This also checks if there are enough spares and adds them
  4351		 * to the array.
  4352		 * We currently require enough spares to make the final
  4353		 * array non-degraded.  We also require that the difference
  4354		 * between old and new data_offset - on each device - is
  4355		 * enough that we never risk over-writing.
  4356		 */
  4357	
  4358		unsigned long before_length, after_length;
  4359		sector_t min_offset_diff = 0;
  4360		int first = 1;
  4361		struct geom new;
  4362		struct r10conf *conf = mddev->private;
  4363		struct md_rdev *rdev;
  4364		int spares = 0;
  4365		int ret;
  4366	
  4367		if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
  4368			return -EBUSY;
  4369		if (md_bitmap_enabled(mddev, false) &&
> 4370		    mddev->bitmap_ops->reshape_can_start) {
  4371			ret = mddev->bitmap_ops->reshape_can_start(mddev);
  4372			if (ret)
  4373				return ret;
  4374		}
  4375	
  4376		if (setup_geo(&new, mddev, geo_start) != conf->copies)
  4377			return -EINVAL;
  4378	
  4379		before_length = ((1 << conf->prev.chunk_shift) *
  4380				 conf->prev.far_copies);
  4381		after_length = ((1 << conf->geo.chunk_shift) *
  4382				conf->geo.far_copies);
  4383	
  4384		rdev_for_each(rdev, mddev) {
  4385			if (!test_bit(In_sync, &rdev->flags)
  4386			    && !test_bit(Faulty, &rdev->flags))
  4387				spares++;
  4388			if (rdev->raid_disk >= 0) {
  4389				long long diff = (rdev->new_data_offset
  4390						  - rdev->data_offset);
  4391				if (!mddev->reshape_backwards)
  4392					diff = -diff;
  4393				if (diff < 0)
  4394					diff = 0;
  4395				if (first || diff < min_offset_diff)
  4396					min_offset_diff = diff;
  4397				first = 0;
  4398			}
  4399		}
  4400	
  4401		if (max(before_length, after_length) > min_offset_diff)
  4402			return -EINVAL;
  4403	
  4404		if (spares < mddev->delta_disks)
  4405			return -EINVAL;
  4406	
  4407		conf->offset_diff = min_offset_diff;
  4408		spin_lock_irq(&conf->device_lock);
  4409		if (conf->mirrors_new) {
  4410			memcpy(conf->mirrors_new, conf->mirrors,
  4411			       sizeof(struct raid10_info)*conf->prev.raid_disks);
  4412			smp_mb();
  4413			kfree(conf->mirrors_old);
  4414			conf->mirrors_old = conf->mirrors;
  4415			conf->mirrors = conf->mirrors_new;
  4416			conf->mirrors_new = NULL;
  4417		}
  4418		setup_geo(&conf->geo, mddev, geo_start);
  4419		smp_mb();
  4420		if (mddev->reshape_backwards) {
  4421			sector_t size = raid10_size(mddev, 0, 0);
  4422			if (size < mddev->array_sectors) {
  4423				spin_unlock_irq(&conf->device_lock);
  4424				pr_warn("md/raid10:%s: array size must be reduce before number of disks\n",
  4425					mdname(mddev));
  4426				return -EINVAL;
  4427			}
  4428			mddev->resync_max_sectors = size;
  4429			conf->reshape_progress = size;
  4430		} else
  4431			conf->reshape_progress = 0;
  4432		conf->reshape_safe = conf->reshape_progress;
  4433		spin_unlock_irq(&conf->device_lock);
  4434	
  4435		if (mddev->delta_disks && mddev->bitmap) {
  4436			struct mdp_superblock_1 *sb = NULL;
  4437			sector_t oldsize, newsize;
  4438	
  4439			oldsize = raid10_size(mddev, 0, 0);
  4440			newsize = raid10_size(mddev, 0, conf->geo.raid_disks);
  4441	
  4442			if (!mddev_is_clustered(mddev) &&
  4443			    md_bitmap_enabled(mddev, false)) {
  4444				ret = mddev->bitmap_ops->resize(mddev, newsize, 0);
  4445				if (ret)
  4446					goto abort;
  4447				else
  4448					goto out;
  4449			}
  4450	
  4451			rdev_for_each(rdev, mddev) {
  4452				if (rdev->raid_disk > -1 &&
  4453				    !test_bit(Faulty, &rdev->flags))
  4454					sb = page_address(rdev->sb_page);
  4455			}
  4456	
  4457			/*
  4458			 * some node is already performing reshape, and no need to
  4459			 * call bitmap_ops->resize again since it should be called when
  4460			 * receiving BITMAP_RESIZE msg
  4461			 */
  4462			if ((sb && (le32_to_cpu(sb->feature_map) &
  4463				    MD_FEATURE_RESHAPE_ACTIVE)) || (oldsize == newsize))
  4464				goto out;
  4465	
  4466			/* cluster can't be setup without bitmap */
  4467			ret = mddev->bitmap_ops->resize(mddev, newsize, 0);
  4468			if (ret)
  4469				goto abort;
  4470	
  4471			ret = mddev->cluster_ops->resize_bitmaps(mddev, newsize, oldsize);
  4472			if (ret) {
  4473				mddev->bitmap_ops->resize(mddev, oldsize, 0);
  4474				goto abort;
  4475			}
  4476		}
  4477	out:
  4478		if (mddev->delta_disks > 0) {
  4479			rdev_for_each(rdev, mddev)
  4480				if (rdev->raid_disk < 0 &&
  4481				    !test_bit(Faulty, &rdev->flags)) {
  4482					if (raid10_add_disk(mddev, rdev) == 0) {
  4483						if (rdev->raid_disk >=
  4484						    conf->prev.raid_disks)
  4485							set_bit(In_sync, &rdev->flags);
  4486						else
  4487							rdev->recovery_offset = 0;
  4488	
  4489						/* Failure here is OK */
  4490						sysfs_link_rdev(mddev, rdev);
  4491					}
  4492				} else if (rdev->raid_disk >= conf->prev.raid_disks
  4493					   && !test_bit(Faulty, &rdev->flags)) {
  4494					/* This is a spare that was manually added */
  4495					set_bit(In_sync, &rdev->flags);
  4496				}
  4497		}
  4498		/* When a reshape changes the number of devices,
  4499		 * ->degraded is measured against the larger of the
  4500		 * pre and  post numbers.
  4501		 */
  4502		spin_lock_irq(&conf->device_lock);
  4503		mddev->degraded = calc_degraded(conf);
  4504		spin_unlock_irq(&conf->device_lock);
  4505		mddev->raid_disks = conf->geo.raid_disks;
  4506		mddev->reshape_position = conf->reshape_progress;
  4507		set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
  4508	
  4509		clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
  4510		clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
  4511		clear_bit(MD_RECOVERY_DONE, &mddev->recovery);
  4512		set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
  4513		set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
  4514		conf->reshape_checkpoint = jiffies;
  4515		md_new_event();
  4516		return 0;
  4517	
  4518	abort:
  4519		mddev->recovery = 0;
  4520		spin_lock_irq(&conf->device_lock);
  4521		conf->geo = conf->prev;
  4522		mddev->raid_disks = conf->geo.raid_disks;
  4523		rdev_for_each(rdev, mddev)
  4524			rdev->new_data_offset = rdev->data_offset;
  4525		smp_wmb();
  4526		conf->reshape_progress = MaxSector;
  4527		conf->reshape_safe = MaxSector;
  4528		mddev->reshape_position = MaxSector;
  4529		spin_unlock_irq(&conf->device_lock);
  4530		return ret;
  4531	}
  4532	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH] md/raid5: split reshape bios before bitmap accounting
  2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
  2026-04-30  0:59   ` kernel test robot
@ 2026-04-30  4:07   ` kernel test robot
  2026-04-30 19:48   ` kernel test robot
  2 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2026-04-30  4:07 UTC (permalink / raw)
  To: Yu Kuai, linux-raid
  Cc: oe-kbuild-all, linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Hi Yu,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v7.1-rc1 next-20260429]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-raid5-split-reshape-bios-before-bitmap-accounting/20260425-083941
base:   linus/master
patch link:    https://lore.kernel.org/r/20260419030942.824195-20-yukuai%40fnnas.com
patch subject: [PATCH] md/raid5: split reshape bios before bitmap accounting
config: parisc64-defconfig (https://download.01.org/0day-ci/archive/20260430/202604301127.hkMOIzHM-lkp@intel.com/config)
compiler: hppa64-linux-gcc (GCC) 15.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260430/202604301127.hkMOIzHM-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202604301127.hkMOIzHM-lkp@intel.com/

All errors (new ones prefixed by >>):

   drivers/md/raid5.c: In function 'raid5_make_request':
>> drivers/md/raid5.c:6126:14: error: implicit declaration of function 'mddev_bio_split_at_reshape_offset' [-Wimplicit-function-declaration]
    6126 |         bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
         |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/md/raid5.c:6126:12: error: assignment to 'struct bio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
    6126 |         bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
         |            ^


vim +/mddev_bio_split_at_reshape_offset +6126 drivers/md/raid5.c

  6083	
  6084	static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
  6085	{
  6086		DEFINE_WAIT_FUNC(wait, woken_wake_function);
  6087		struct r5conf *conf = mddev->private;
  6088		const int rw = bio_data_dir(bi);
  6089		struct stripe_request_ctx *ctx;
  6090		sector_t logical_sector;
  6091		enum stripe_result res;
  6092		int s, stripe_cnt;
  6093		bool on_wq;
  6094	
  6095		if (unlikely(bi->bi_opf & REQ_PREFLUSH)) {
  6096			int ret = log_handle_flush_request(conf, bi);
  6097	
  6098			if (ret == 0)
  6099				return true;
  6100			if (ret == -ENODEV) {
  6101				if (md_flush_request(mddev, bi))
  6102					return true;
  6103			}
  6104			/* ret == -EAGAIN, fallback */
  6105		}
  6106	
  6107		md_write_start(mddev, bi);
  6108		/*
  6109		 * If array is degraded, better not do chunk aligned read because
  6110		 * later we might have to read it again in order to reconstruct
  6111		 * data on failed drives.
  6112		 */
  6113		if (rw == READ && mddev->degraded == 0 &&
  6114		    mddev->reshape_position == MaxSector) {
  6115			bi = chunk_aligned_read(mddev, bi);
  6116			if (!bi)
  6117				return true;
  6118		}
  6119	
  6120		if (unlikely(bio_op(bi) == REQ_OP_DISCARD)) {
  6121			make_discard_request(mddev, bi);
  6122			md_write_end(mddev);
  6123			return true;
  6124		}
  6125	
> 6126		bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
  6127						       &conf->bio_split);
  6128		if (!bi) {
  6129			if (rw == WRITE)
  6130				md_write_end(mddev);
  6131			return true;
  6132		}
  6133	
  6134		logical_sector = bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTORS(conf)-1);
  6135		bi->bi_next = NULL;
  6136	
  6137		ctx = mempool_alloc(conf->ctx_pool, GFP_NOIO);
  6138		memset(ctx, 0, conf->ctx_size);
  6139		ctx->first_sector = logical_sector;
  6140		ctx->last_sector = bio_end_sector(bi);
  6141		/*
  6142		 * if r5l_handle_flush_request() didn't clear REQ_PREFLUSH,
  6143		 * we need to flush journal device
  6144		 */
  6145		if (unlikely(bi->bi_opf & REQ_PREFLUSH))
  6146			ctx->do_flush = true;
  6147	
  6148		stripe_cnt = DIV_ROUND_UP_SECTOR_T(ctx->last_sector - logical_sector,
  6149						   RAID5_STRIPE_SECTORS(conf));
  6150		bitmap_set(ctx->sectors_to_do, 0, stripe_cnt);
  6151	
  6152		pr_debug("raid456: %s, logical %llu to %llu\n", __func__,
  6153			 bi->bi_iter.bi_sector, ctx->last_sector);
  6154	
  6155		/* Bail out if conflicts with reshape and REQ_NOWAIT is set */
  6156		if ((bi->bi_opf & REQ_NOWAIT) &&
  6157		    get_reshape_loc(mddev, conf, logical_sector) == LOC_INSIDE_RESHAPE) {
  6158			bio_wouldblock_error(bi);
  6159			if (rw == WRITE)
  6160				md_write_end(mddev);
  6161			mempool_free(ctx, conf->ctx_pool);
  6162			return true;
  6163		}
  6164		md_account_bio(mddev, &bi);
  6165	
  6166		/*
  6167		 * Lets start with the stripe with the lowest chunk offset in the first
  6168		 * chunk. That has the best chances of creating IOs adjacent to
  6169		 * previous IOs in case of sequential IO and thus creates the most
  6170		 * sequential IO pattern. We don't bother with the optimization when
  6171		 * reshaping as the performance benefit is not worth the complexity.
  6172		 */
  6173		if (likely(conf->reshape_progress == MaxSector)) {
  6174			logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
  6175			on_wq = false;
  6176		} else {
  6177			add_wait_queue(&conf->wait_for_reshape, &wait);
  6178			on_wq = true;
  6179		}
  6180		s = (logical_sector - ctx->first_sector) >> RAID5_STRIPE_SHIFT(conf);
  6181	
  6182		while (1) {
  6183			res = make_stripe_request(mddev, conf, ctx, logical_sector,
  6184						  bi);
  6185			if (res == STRIPE_FAIL || res == STRIPE_WAIT_RESHAPE)
  6186				break;
  6187	
  6188			if (res == STRIPE_RETRY)
  6189				continue;
  6190	
  6191			if (res == STRIPE_SCHEDULE_AND_RETRY) {
  6192				WARN_ON_ONCE(!on_wq);
  6193				/*
  6194				 * Must release the reference to batch_last before
  6195				 * scheduling and waiting for work to be done,
  6196				 * otherwise the batch_last stripe head could prevent
  6197				 * raid5_activate_delayed() from making progress
  6198				 * and thus deadlocking.
  6199				 */
  6200				if (ctx->batch_last) {
  6201					raid5_release_stripe(ctx->batch_last);
  6202					ctx->batch_last = NULL;
  6203				}
  6204	
  6205				wait_woken(&wait, TASK_UNINTERRUPTIBLE,
  6206					   MAX_SCHEDULE_TIMEOUT);
  6207				continue;
  6208			}
  6209	
  6210			s = find_next_bit_wrap(ctx->sectors_to_do, stripe_cnt, s);
  6211			if (s == stripe_cnt)
  6212				break;
  6213	
  6214			logical_sector = ctx->first_sector +
  6215				(s << RAID5_STRIPE_SHIFT(conf));
  6216		}
  6217		if (unlikely(on_wq))
  6218			remove_wait_queue(&conf->wait_for_reshape, &wait);
  6219	
  6220		if (ctx->batch_last)
  6221			raid5_release_stripe(ctx->batch_last);
  6222	
  6223		if (rw == WRITE)
  6224			md_write_end(mddev);
  6225	
  6226		mempool_free(ctx, conf->ctx_pool);
  6227		if (res == STRIPE_WAIT_RESHAPE) {
  6228			md_free_cloned_bio(bi);
  6229			return false;
  6230		}
  6231	
  6232		bio_endio(bi);
  6233		return true;
  6234	}
  6235	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH] md/raid5: split reshape bios before bitmap accounting
  2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
  2026-04-30  0:59   ` kernel test robot
  2026-04-30  4:07   ` kernel test robot
@ 2026-04-30 19:48   ` kernel test robot
  2 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2026-04-30 19:48 UTC (permalink / raw)
  To: Yu Kuai, linux-raid
  Cc: oe-kbuild-all, linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Hi Yu,

kernel test robot noticed the following build warnings:

[auto build test WARNING on linus/master]
[also build test WARNING on song-md/md-next v7.1-rc1 next-20260429]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-raid5-split-reshape-bios-before-bitmap-accounting/20260425-083941
base:   linus/master
patch link:    https://lore.kernel.org/r/20260419030942.824195-20-yukuai%40fnnas.com
patch subject: [PATCH] md/raid5: split reshape bios before bitmap accounting
config: arc-randconfig-001 (https://download.01.org/0day-ci/archive/20260501/202605010352.oO1uCwCR-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 8.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260501/202605010352.oO1uCwCR-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202605010352.oO1uCwCR-lkp@intel.com/

All warnings (new ones prefixed by >>):

   drivers/md/raid5.c: In function 'raid5_make_request':
   drivers/md/raid5.c:6126:7: error: implicit declaration of function 'mddev_bio_split_at_reshape_offset' [-Werror=implicit-function-declaration]
     bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/md/raid5.c:6126:5: warning: assignment to 'struct bio *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
     bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
        ^
   cc1: some warnings being treated as errors


vim +6126 drivers/md/raid5.c

  6083	
  6084	static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
  6085	{
  6086		DEFINE_WAIT_FUNC(wait, woken_wake_function);
  6087		struct r5conf *conf = mddev->private;
  6088		const int rw = bio_data_dir(bi);
  6089		struct stripe_request_ctx *ctx;
  6090		sector_t logical_sector;
  6091		enum stripe_result res;
  6092		int s, stripe_cnt;
  6093		bool on_wq;
  6094	
  6095		if (unlikely(bi->bi_opf & REQ_PREFLUSH)) {
  6096			int ret = log_handle_flush_request(conf, bi);
  6097	
  6098			if (ret == 0)
  6099				return true;
  6100			if (ret == -ENODEV) {
  6101				if (md_flush_request(mddev, bi))
  6102					return true;
  6103			}
  6104			/* ret == -EAGAIN, fallback */
  6105		}
  6106	
  6107		md_write_start(mddev, bi);
  6108		/*
  6109		 * If array is degraded, better not do chunk aligned read because
  6110		 * later we might have to read it again in order to reconstruct
  6111		 * data on failed drives.
  6112		 */
  6113		if (rw == READ && mddev->degraded == 0 &&
  6114		    mddev->reshape_position == MaxSector) {
  6115			bi = chunk_aligned_read(mddev, bi);
  6116			if (!bi)
  6117				return true;
  6118		}
  6119	
  6120		if (unlikely(bio_op(bi) == REQ_OP_DISCARD)) {
  6121			make_discard_request(mddev, bi);
  6122			md_write_end(mddev);
  6123			return true;
  6124		}
  6125	
> 6126		bi = mddev_bio_split_at_reshape_offset(mddev, bi, NULL,
  6127						       &conf->bio_split);
  6128		if (!bi) {
  6129			if (rw == WRITE)
  6130				md_write_end(mddev);
  6131			return true;
  6132		}
  6133	
  6134		logical_sector = bi->bi_iter.bi_sector & ~((sector_t)RAID5_STRIPE_SECTORS(conf)-1);
  6135		bi->bi_next = NULL;
  6136	
  6137		ctx = mempool_alloc(conf->ctx_pool, GFP_NOIO);
  6138		memset(ctx, 0, conf->ctx_size);
  6139		ctx->first_sector = logical_sector;
  6140		ctx->last_sector = bio_end_sector(bi);
  6141		/*
  6142		 * if r5l_handle_flush_request() didn't clear REQ_PREFLUSH,
  6143		 * we need to flush journal device
  6144		 */
  6145		if (unlikely(bi->bi_opf & REQ_PREFLUSH))
  6146			ctx->do_flush = true;
  6147	
  6148		stripe_cnt = DIV_ROUND_UP_SECTOR_T(ctx->last_sector - logical_sector,
  6149						   RAID5_STRIPE_SECTORS(conf));
  6150		bitmap_set(ctx->sectors_to_do, 0, stripe_cnt);
  6151	
  6152		pr_debug("raid456: %s, logical %llu to %llu\n", __func__,
  6153			 bi->bi_iter.bi_sector, ctx->last_sector);
  6154	
  6155		/* Bail out if conflicts with reshape and REQ_NOWAIT is set */
  6156		if ((bi->bi_opf & REQ_NOWAIT) &&
  6157		    get_reshape_loc(mddev, conf, logical_sector) == LOC_INSIDE_RESHAPE) {
  6158			bio_wouldblock_error(bi);
  6159			if (rw == WRITE)
  6160				md_write_end(mddev);
  6161			mempool_free(ctx, conf->ctx_pool);
  6162			return true;
  6163		}
  6164		md_account_bio(mddev, &bi);
  6165	
  6166		/*
  6167		 * Lets start with the stripe with the lowest chunk offset in the first
  6168		 * chunk. That has the best chances of creating IOs adjacent to
  6169		 * previous IOs in case of sequential IO and thus creates the most
  6170		 * sequential IO pattern. We don't bother with the optimization when
  6171		 * reshaping as the performance benefit is not worth the complexity.
  6172		 */
  6173		if (likely(conf->reshape_progress == MaxSector)) {
  6174			logical_sector = raid5_bio_lowest_chunk_sector(conf, bi);
  6175			on_wq = false;
  6176		} else {
  6177			add_wait_queue(&conf->wait_for_reshape, &wait);
  6178			on_wq = true;
  6179		}
  6180		s = (logical_sector - ctx->first_sector) >> RAID5_STRIPE_SHIFT(conf);
  6181	
  6182		while (1) {
  6183			res = make_stripe_request(mddev, conf, ctx, logical_sector,
  6184						  bi);
  6185			if (res == STRIPE_FAIL || res == STRIPE_WAIT_RESHAPE)
  6186				break;
  6187	
  6188			if (res == STRIPE_RETRY)
  6189				continue;
  6190	
  6191			if (res == STRIPE_SCHEDULE_AND_RETRY) {
  6192				WARN_ON_ONCE(!on_wq);
  6193				/*
  6194				 * Must release the reference to batch_last before
  6195				 * scheduling and waiting for work to be done,
  6196				 * otherwise the batch_last stripe head could prevent
  6197				 * raid5_activate_delayed() from making progress
  6198				 * and thus deadlocking.
  6199				 */
  6200				if (ctx->batch_last) {
  6201					raid5_release_stripe(ctx->batch_last);
  6202					ctx->batch_last = NULL;
  6203				}
  6204	
  6205				wait_woken(&wait, TASK_UNINTERRUPTIBLE,
  6206					   MAX_SCHEDULE_TIMEOUT);
  6207				continue;
  6208			}
  6209	
  6210			s = find_next_bit_wrap(ctx->sectors_to_do, stripe_cnt, s);
  6211			if (s == stripe_cnt)
  6212				break;
  6213	
  6214			logical_sector = ctx->first_sector +
  6215				(s << RAID5_STRIPE_SHIFT(conf));
  6216		}
  6217		if (unlikely(on_wq))
  6218			remove_wait_queue(&conf->wait_for_reshape, &wait);
  6219	
  6220		if (ctx->batch_last)
  6221			raid5_release_stripe(ctx->batch_last);
  6222	
  6223		if (rw == WRITE)
  6224			md_write_end(mddev);
  6225	
  6226		mempool_free(ctx, conf->ctx_pool);
  6227		if (res == STRIPE_WAIT_RESHAPE) {
  6228			md_free_cloned_bio(bi);
  6229			return false;
  6230		}
  6231	
  6232		bio_endio(bi);
  6233		return true;
  6234	}
  6235	



* Re: [PATCH] md/raid5: add exact old and new llbitmap mapping helpers
  2026-04-19  3:09 ` [PATCH] md/raid5: add exact old and new llbitmap mapping helpers Yu Kuai
@ 2026-05-01 18:51   ` kernel test robot
  0 siblings, 0 replies; 25+ messages in thread
From: kernel test robot @ 2026-05-01 18:51 UTC (permalink / raw)
  To: Yu Kuai, linux-raid
  Cc: llvm, oe-kbuild-all, linux-kernel, Li Nan, Yu Kuai, Cheng Cheng

Hi Yu,

kernel test robot noticed the following build errors:

[auto build test ERROR on linus/master]
[also build test ERROR on v7.1-rc1 next-20260430]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/md-raid5-add-exact-old-and-new-llbitmap-mapping-helpers/20260421-233709
base:   linus/master
patch link:    https://lore.kernel.org/r/20260419030942.824195-17-yukuai%40fnnas.com
patch subject: [PATCH] md/raid5: add exact old and new llbitmap mapping helpers
config: hexagon-allmodconfig (https://download.01.org/0day-ci/archive/20260502/202605020242.1lRKHrkP-lkp@intel.com/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260502/202605020242.1lRKHrkP-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202605020242.1lRKHrkP-lkp@intel.com/

All errors (new ones prefixed by >>):

>> drivers/md/raid5.c:9065:3: error: field designator 'bitmap_sector_map' does not refer to any field in type 'struct md_personality'; did you mean 'bitmap_sector'?
    9065 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |          ^~~~~~~~~~~~~~~~~
         |          bitmap_sector
   drivers/md/md.h:797:9: note: 'bitmap_sector' declared here
     797 |         void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
         |                ^
>> drivers/md/raid5.c:9065:23: error: incompatible function pointer types initializing 'void (*)(struct mddev *, sector_t *, unsigned long *)' (aka 'void (*)(struct mddev *, unsigned long long *, unsigned long *)') with an expression of type 'void (struct mddev *, sector_t *, unsigned long *, bool)' (aka 'void (struct mddev *, unsigned long long *, unsigned long *, _Bool)') [-Wincompatible-function-pointer-types]
    9065 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9065:23: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
    9065 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9064:19: note: previous initialization is here
    9064 |         .bitmap_sector  = raid5_bitmap_sector,
         |                           ^~~~~~~~~~~~~~~~~~~
>> drivers/md/raid5.c:9066:3: error: field designator 'bitmap_sync_size' does not refer to any field in type 'struct md_personality'
    9066 |         .bitmap_sync_size = raid5_bitmap_sync_size,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/md/raid5.c:9067:3: error: field designator 'bitmap_array_sectors' does not refer to any field in type 'struct md_personality'
    9067 |         .bitmap_array_sectors = raid5_bitmap_array_sectors,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9098:3: error: field designator 'bitmap_sector_map' does not refer to any field in type 'struct md_personality'; did you mean 'bitmap_sector'?
    9098 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |          ^~~~~~~~~~~~~~~~~
         |          bitmap_sector
   drivers/md/md.h:797:9: note: 'bitmap_sector' declared here
     797 |         void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
         |                ^
   drivers/md/raid5.c:9098:23: error: incompatible function pointer types initializing 'void (*)(struct mddev *, sector_t *, unsigned long *)' (aka 'void (*)(struct mddev *, unsigned long long *, unsigned long *)') with an expression of type 'void (struct mddev *, sector_t *, unsigned long *, bool)' (aka 'void (struct mddev *, unsigned long long *, unsigned long *, _Bool)') [-Wincompatible-function-pointer-types]
    9098 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9098:23: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
    9098 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9097:19: note: previous initialization is here
    9097 |         .bitmap_sector  = raid5_bitmap_sector,
         |                           ^~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9099:3: error: field designator 'bitmap_sync_size' does not refer to any field in type 'struct md_personality'
    9099 |         .bitmap_sync_size = raid5_bitmap_sync_size,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9100:3: error: field designator 'bitmap_array_sectors' does not refer to any field in type 'struct md_personality'
    9100 |         .bitmap_array_sectors = raid5_bitmap_array_sectors,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9132:3: error: field designator 'bitmap_sector_map' does not refer to any field in type 'struct md_personality'; did you mean 'bitmap_sector'?
    9132 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |          ^~~~~~~~~~~~~~~~~
         |          bitmap_sector
   drivers/md/md.h:797:9: note: 'bitmap_sector' declared here
     797 |         void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
         |                ^
   drivers/md/raid5.c:9132:23: error: incompatible function pointer types initializing 'void (*)(struct mddev *, sector_t *, unsigned long *)' (aka 'void (*)(struct mddev *, unsigned long long *, unsigned long *)') with an expression of type 'void (struct mddev *, sector_t *, unsigned long *, bool)' (aka 'void (struct mddev *, unsigned long long *, unsigned long *, _Bool)') [-Wincompatible-function-pointer-types]
    9132 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9132:23: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
    9132 |         .bitmap_sector_map = raid5_bitmap_sector_map,
         |                              ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9131:19: note: previous initialization is here
    9131 |         .bitmap_sector  = raid5_bitmap_sector,
         |                           ^~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9133:3: error: field designator 'bitmap_sync_size' does not refer to any field in type 'struct md_personality'
    9133 |         .bitmap_sync_size = raid5_bitmap_sync_size,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   drivers/md/raid5.c:9134:3: error: field designator 'bitmap_array_sectors' does not refer to any field in type 'struct md_personality'
    9134 |         .bitmap_array_sectors = raid5_bitmap_array_sectors,
         |         ~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   3 warnings and 12 errors generated.


vim +9065 drivers/md/raid5.c

  9035	
  9036	static struct md_personality raid6_personality =
  9037	{
  9038		.head = {
  9039			.type	= MD_PERSONALITY,
  9040			.id	= ID_RAID6,
  9041			.name	= "raid6",
  9042			.owner	= THIS_MODULE,
  9043		},
  9044	
  9045		.make_request	= raid5_make_request,
  9046		.run		= raid5_run,
  9047		.start		= raid5_start,
  9048		.free		= raid5_free,
  9049		.status		= raid5_status,
  9050		.error_handler	= raid5_error,
  9051		.hot_add_disk	= raid5_add_disk,
  9052		.hot_remove_disk= raid5_remove_disk,
  9053		.spare_active	= raid5_spare_active,
  9054		.sync_request	= raid5_sync_request,
  9055		.resize		= raid5_resize,
  9056		.size		= raid5_size,
  9057		.check_reshape	= raid6_check_reshape,
  9058		.start_reshape  = raid5_start_reshape,
  9059		.finish_reshape = raid5_finish_reshape,
  9060		.quiesce	= raid5_quiesce,
  9061		.takeover	= raid6_takeover,
  9062		.change_consistency_policy = raid5_change_consistency_policy,
  9063		.prepare_suspend = raid5_prepare_suspend,
  9064		.bitmap_sector	= raid5_bitmap_sector,
> 9065		.bitmap_sector_map = raid5_bitmap_sector_map,
> 9066		.bitmap_sync_size = raid5_bitmap_sync_size,
> 9067		.bitmap_array_sectors = raid5_bitmap_array_sectors,
  9068	};
  9069	static struct md_personality raid5_personality =
  9070	{
  9071		.head = {
  9072			.type	= MD_PERSONALITY,
  9073			.id	= ID_RAID5,
  9074			.name	= "raid5",
  9075			.owner	= THIS_MODULE,
  9076		},
  9077	
  9078		.make_request	= raid5_make_request,
  9079		.run		= raid5_run,
  9080		.start		= raid5_start,
  9081		.free		= raid5_free,
  9082		.status		= raid5_status,
  9083		.error_handler	= raid5_error,
  9084		.hot_add_disk	= raid5_add_disk,
  9085		.hot_remove_disk= raid5_remove_disk,
  9086		.spare_active	= raid5_spare_active,
  9087		.sync_request	= raid5_sync_request,
  9088		.resize		= raid5_resize,
  9089		.size		= raid5_size,
  9090		.check_reshape	= raid5_check_reshape,
  9091		.start_reshape  = raid5_start_reshape,
  9092		.finish_reshape = raid5_finish_reshape,
  9093		.quiesce	= raid5_quiesce,
  9094		.takeover	= raid5_takeover,
  9095		.change_consistency_policy = raid5_change_consistency_policy,
  9096		.prepare_suspend = raid5_prepare_suspend,
  9097		.bitmap_sector	= raid5_bitmap_sector,
  9098		.bitmap_sector_map = raid5_bitmap_sector_map,
  9099		.bitmap_sync_size = raid5_bitmap_sync_size,
  9100		.bitmap_array_sectors = raid5_bitmap_array_sectors,
  9101	};
  9102	



end of thread, other threads:[~2026-05-01 18:52 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-19  3:09 [PATCH 00/19] md: support llbitmap reshape for raid10 and raid5 Yu Kuai
2026-04-19  3:09 ` [PATCH] md: add exact bitmap mapping and reshape hooks Yu Kuai
2026-04-19  3:09 ` [PATCH] md: add helper to split bios at reshape offset Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track bitmap sync_size explicitly Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: allocate page controls independently Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: grow the page cache in place for reshape Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: track target reshape geometry fields Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: finish reshape geometry Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: refuse reshape while llbitmap still needs sync Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: add reshape range mapping helpers Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: don't skip reshape ranges from bitmap state Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: remap checkpointed bits as reshape progresses Yu Kuai
2026-04-19  3:09 ` [PATCH] md/md-llbitmap: clamp state-machine walks to tracked bits Yu Kuai
2026-04-19  3:09 ` [PATCH] md/raid10: reject llbitmap chunk shrink during reshape Yu Kuai
2026-04-19  3:09 ` [PATCH] md/raid10: wire llbitmap reshape lifecycle Yu Kuai
2026-04-30  2:37   ` kernel test robot
2026-04-19  3:09 ` [PATCH] md/raid10: split reshape bios before bitmap accounting Yu Kuai
2026-04-19  3:09 ` [PATCH] md/raid5: add exact old and new llbitmap mapping helpers Yu Kuai
2026-05-01 18:51   ` kernel test robot
2026-04-19  3:09 ` [PATCH] md/raid5: reject llbitmap chunk shrink during reshape Yu Kuai
2026-04-19  3:09 ` [PATCH] md/raid5: wire llbitmap reshape lifecycle Yu Kuai
2026-04-19  3:09 ` [PATCH] md/raid5: split reshape bios before bitmap accounting Yu Kuai
2026-04-30  0:59   ` kernel test robot
2026-04-30  4:07   ` kernel test robot
2026-04-30 19:48   ` kernel test robot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox