* [PATCH] md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb
@ 2025-03-02 4:34 Su Yue
2025-03-02 4:40 ` Glass Su
From: Su Yue @ 2025-03-02 4:34 UTC (permalink / raw)
To: linux-raid; +Cc: hch, ofir.gal, heming.zhao, yukuai3, l, Su Yue
In clustermd, separate write-intent bitmaps are used for each cluster
node:
0                     4k                    8k                    12k
-------------------------------------------------------------------
| idle                | md super            | bm super [0] + bits |
| bm bits[0, contd]   | bm super[1] + bits  | bm bits[1, contd]   |
| bm super[2] + bits  | bm bits [2, contd]  | bm super[3] + bits  |
| bm bits [3, contd]  |                     |                     |
So in node 1, pg_index in __write_sb_page() can equal
bitmap->storage.file_pages. Then bitmap_limit is calculated as 0, and
md_super_write() is called with a size of 0.
That means the first 4k sb area of node 1 is never updated
through filemap_write_page().
This bug causes a hang of mdadm/clustermd_tests/01r1_Grow_resize.
Use (pg_index % num_pages) to calculate bitmap_limit to fix it.
Fixes: ab99a87542f1 ("md/md-bitmap: fix writing non bitmap pages")
Signed-off-by: Su Yue <glass.su@suse.com>
---
drivers/md/md-bitmap.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index 23c09d22fcdb..e055cfac318c 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -426,8 +426,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
struct block_device *bdev;
struct mddev *mddev = bitmap->mddev;
struct bitmap_storage *store = &bitmap->storage;
- unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) <<
- PAGE_SHIFT;
+ unsigned long num_pages = bitmap->storage.file_pages;
+ unsigned int bitmap_limit = (num_pages - pg_index % num_pages) << PAGE_SHIFT;
loff_t sboff, offset = mddev->bitmap_info.offset;
sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
unsigned int size = PAGE_SIZE;
@@ -436,7 +436,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
bdev = (rdev->meta_bdev) ? rdev->meta_bdev : rdev->bdev;
/* we compare length (page numbers), not page offset. */
- if ((pg_index - store->sb_index) == store->file_pages - 1) {
+ if ((pg_index - store->sb_index) == num_pages - 1) {
unsigned int last_page_size = store->bytes & (PAGE_SIZE - 1);
if (last_page_size == 0)
@@ -472,7 +472,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
return -EINVAL;
}
- md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
+ md_super_write(mddev, rdev, sboff + ps, min(size, bitmap_limit), page);
return 0;
}
--
2.47.1
* Re: [PATCH] md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb
2025-03-02 4:34 [PATCH] md/md-bitmap: fix wrong bitmap_limit for clustermd when write sb Su Yue
@ 2025-03-02 4:40 ` Glass Su
From: Glass Su @ 2025-03-02 4:40 UTC (permalink / raw)
To: linux-raid; +Cc: hch, ofir.gal, Heming Zhao, yukuai3, Su Yue
> On Mar 2, 2025, at 12:34, Su Yue <glass.su@suse.com> wrote:
>
> In clustermd, separate write-intent bitmaps are used for each cluster
> node:
>
> 0                     4k                    8k                    12k
> -------------------------------------------------------------------
> | idle                | md super            | bm super [0] + bits |
> | bm bits[0, contd]   | bm super[1] + bits  | bm bits[1, contd]   |
> | bm super[2] + bits  | bm bits [2, contd]  | bm super[3] + bits |
> | bm bits [3, contd]  |                     |                     |
>
> So in node 1, pg_index in __write_sb_page() can equal
> bitmap->storage.file_pages. Then bitmap_limit is calculated as 0, and
> md_super_write() is called with a size of 0.
> That means the first 4k sb area of node 1 is never updated
> through filemap_write_page().
> This bug causes a hang of mdadm/clustermd_tests/01r1_Grow_resize.
>
> Use (pg_index % num_pages) to calculate bitmap_limit to fix it.
>
> Fixes: ab99a87542f1 ("md/md-bitmap: fix writing non bitmap pages")
> Signed-off-by: Su Yue <glass.su@suse.com>
> ---
> drivers/md/md-bitmap.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
> index 23c09d22fcdb..e055cfac318c 100644
> --- a/drivers/md/md-bitmap.c
> +++ b/drivers/md/md-bitmap.c
> @@ -426,8 +426,8 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
> struct block_device *bdev;
> struct mddev *mddev = bitmap->mddev;
> struct bitmap_storage *store = &bitmap->storage;
> - unsigned int bitmap_limit = (bitmap->storage.file_pages - pg_index) <<
> - PAGE_SHIFT;
> + unsigned long num_pages = bitmap->storage.file_pages;
> + unsigned int bitmap_limit = (num_pages - pg_index % num_pages) << PAGE_SHIFT;
> loff_t sboff, offset = mddev->bitmap_info.offset;
> sector_t ps = pg_index * PAGE_SIZE / SECTOR_SIZE;
> unsigned int size = PAGE_SIZE;
> @@ -436,7 +436,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
>
> bdev = (rdev->meta_bdev) ? rdev->meta_bdev : rdev->bdev;
> /* we compare length (page numbers), not page offset. */
> - if ((pg_index - store->sb_index) == store->file_pages - 1) {
> + if ((pg_index - store->sb_index) == num_pages - 1) {
> unsigned int last_page_size = store->bytes & (PAGE_SIZE - 1);
>
> if (last_page_size == 0)
> @@ -472,7 +472,7 @@ static int __write_sb_page(struct md_rdev *rdev, struct bitmap *bitmap,
> return -EINVAL;
> }
>
> - md_super_write(mddev, rdev, sboff + ps, (int)min(size, bitmap_limit), page);
> + md_super_write(mddev, rdev, sboff + ps, min(size, bitmap_limit), page);
This cast removal was an unintended change. Fixed in v2.
> return 0;
> }
>
> --
> 2.47.1
>