From: linan666@huaweicloud.com
To: song@kernel.org, yukuai@fnnas.com
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
linan666@huaweicloud.com, yangerkun@huawei.com,
yi.zhang@huawei.com
Subject: [PATCH v3 4/8] md/raid1: use folio for tmppage
Date: Thu, 16 Apr 2026 11:37:57 +0800
Message-ID: <20260416033801.786415-5-linan666@huaweicloud.com>
In-Reply-To: <20260416033801.786415-1-linan666@huaweicloud.com>
From: Li Nan <linan122@huawei.com>

Convert tmppage to tmpfolio and use it throughout raid1.

Signed-off-by: Li Nan <linan122@huawei.com>
Reviewed-by: Xiao Ni <xni@redhat.com>
---
drivers/md/raid1.h | 2 +-
drivers/md/raid1.c | 18 ++++++++++--------
2 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
index c98d43a7ae99..d480b3a8c2c4 100644
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -101,7 +101,7 @@ struct r1conf {
/* temporary buffer to synchronous IO when attempting to repair
* a read error.
*/
- struct page *tmppage;
+ struct folio *tmpfolio;
/* When taking over an array from a different personality, we store
* the new thread here until we fully activate the array.
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 5a73a9f19e0e..a72abdc37a2d 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2417,8 +2417,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
rdev->recovery_offset >= sect + s)) &&
rdev_has_badblock(rdev, sect, s) == 0) {
atomic_inc(&rdev->nr_pending);
- if (sync_page_io(rdev, sect, s<<9,
- conf->tmppage, REQ_OP_READ, false))
+ if (sync_folio_io(rdev, sect, s<<9, 0,
+ conf->tmpfolio, REQ_OP_READ, false))
success = 1;
rdev_dec_pending(rdev, mddev);
if (success)
@@ -2447,7 +2447,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
!test_bit(Faulty, &rdev->flags)) {
atomic_inc(&rdev->nr_pending);
r1_sync_page_io(rdev, sect, s,
- conf->tmppage, REQ_OP_WRITE);
+ folio_page(conf->tmpfolio, 0),
+ REQ_OP_WRITE);
rdev_dec_pending(rdev, mddev);
}
}
@@ -2461,7 +2462,8 @@ static void fix_read_error(struct r1conf *conf, struct r1bio *r1_bio)
!test_bit(Faulty, &rdev->flags)) {
atomic_inc(&rdev->nr_pending);
if (r1_sync_page_io(rdev, sect, s,
- conf->tmppage, REQ_OP_READ)) {
+ folio_page(conf->tmpfolio, 0),
+ REQ_OP_READ)) {
atomic_add(s, &rdev->corrected_errors);
pr_info("md/raid1:%s: read error corrected (%d sectors at %llu on %pg)\n",
mdname(mddev), s,
@@ -3099,8 +3101,8 @@ static struct r1conf *setup_conf(struct mddev *mddev)
if (!conf->mirrors)
goto abort;
- conf->tmppage = alloc_page(GFP_KERNEL);
- if (!conf->tmppage)
+ conf->tmpfolio = folio_alloc(GFP_KERNEL, 0);
+ if (!conf->tmpfolio)
goto abort;
r1bio_size = offsetof(struct r1bio, bios[mddev->raid_disks * 2]);
@@ -3175,7 +3177,7 @@ static struct r1conf *setup_conf(struct mddev *mddev)
if (conf) {
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);
- safe_put_page(conf->tmppage);
+ safe_folio_put(conf->tmpfolio);
kfree(conf->nr_pending);
kfree(conf->nr_waiting);
kfree(conf->nr_queued);
@@ -3290,7 +3292,7 @@ static void raid1_free(struct mddev *mddev, void *priv)
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);
- safe_put_page(conf->tmppage);
+ safe_folio_put(conf->tmpfolio);
kfree(conf->nr_pending);
kfree(conf->nr_waiting);
kfree(conf->nr_queued);
--
2.39.2
Thread overview: 9+ messages
2026-04-16 3:37 [PATCH v3 0/8] folio support for sync I/O in RAID linan666
2026-04-16 3:37 ` [PATCH v3 1/8] md/raid1,raid10: clean up of RESYNC_SECTORS linan666
2026-04-16 3:37 ` [PATCH v3 2/8] md: introduce sync_folio_io for folio support in RAID linan666
2026-04-16 3:37 ` [PATCH v3 3/8] md: introduce safe_put_folio " linan666
2026-04-16 3:37 ` linan666 [this message]
2026-04-16 3:37 ` [PATCH v3 5/8] md/raid10: use folio for tmppage linan666
2026-04-16 3:37 ` [PATCH v3 6/8] md/raid1,raid10: use folio for sync path IO linan666
2026-04-16 3:38 ` [PATCH v3 7/8] md/raid1: fix IO error at logical block size granularity linan666
2026-04-16 3:38 ` [PATCH v3 8/8] md/raid10: " linan666