* [PATCH v2 0/5] md: fix is_mddev_idle()
@ 2025-04-18 1:09 Yu Kuai
2025-04-18 1:09 ` [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs Yu Kuai
` (4 more replies)
0 siblings, 5 replies; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Changes in v2:
- more cleanups in patch 1;
- add patch 2, to record dm-raid gendisk, and also handle dm-raid in
patch 4;
- keep the old comments in patch 3;
Yu Kuai (5):
block: cleanup and export bdev IO inflight APIs
md: record dm-raid gendisk in mddev
md: add a new api sync_io_depth
md: fix is_mddev_idle()
md: cleanup accounting for issued sync IO
block/blk-core.c | 2 +-
block/blk-mq.c | 15 +--
block/blk-mq.h | 7 +-
block/blk.h | 1 -
block/genhd.c | 48 +++++-----
drivers/md/dm-raid.c | 3 +
drivers/md/md.c | 193 +++++++++++++++++++++++++++-----------
drivers/md/md.h | 18 +---
drivers/md/raid1.c | 3 -
drivers/md/raid10.c | 9 --
drivers/md/raid5.c | 8 --
include/linux/blkdev.h | 1 -
include/linux/part_stat.h | 10 ++
13 files changed, 184 insertions(+), 134 deletions(-)
--
2.39.2
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
@ 2025-04-18 1:09 ` Yu Kuai
2025-04-21 11:59 ` Christoph Hellwig
2025-04-18 1:09 ` [PATCH v2 2/5] md: record dm-raid gendisk in mddev Yu Kuai
` (3 subsequent siblings)
4 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
- rename part_in_{flight,flight_rw} to bdev_count_{inflight,inflight_rw}.
- export bdev_count_inflight_rw, so that later patches can fix a problem
in mdraid where foreground IO can be starved by background sync IO.
- reimplement bdev_count_inflight on top of bdev_count_inflight_rw, and
warn once if the inflight count goes negative, which means the related
driver is buggy.
- remove the unused blk_mq_in_flight().
- rename blk_mq_in_flight_rw to blk_mq_count_in_driver_rw, to distinguish
it from bdev_count_inflight_rw.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
block/blk-core.c | 2 +-
block/blk-mq.c | 15 +++---------
block/blk-mq.h | 7 +++---
block/blk.h | 1 -
block/genhd.c | 48 +++++++++++++++++++--------------------
include/linux/part_stat.h | 10 ++++++++
6 files changed, 40 insertions(+), 43 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 4623de79effa..ead2072eb3bd 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1018,7 +1018,7 @@ void update_io_ticks(struct block_device *part, unsigned long now, bool end)
stamp = READ_ONCE(part->bd_stamp);
if (unlikely(time_after(now, stamp)) &&
likely(try_cmpxchg(&part->bd_stamp, &stamp, now)) &&
- (end || part_in_flight(part)))
+ (end || bdev_count_inflight(part)))
__part_stat_add(part, io_ticks, now - stamp);
if (bdev_is_partition(part)) {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index c2697db59109..a2bd77601623 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -101,18 +101,9 @@ static bool blk_mq_check_inflight(struct request *rq, void *priv)
return true;
}
-unsigned int blk_mq_in_flight(struct request_queue *q,
- struct block_device *part)
-{
- struct mq_inflight mi = { .part = part };
-
- blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
-
- return mi.inflight[0] + mi.inflight[1];
-}
-
-void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
- unsigned int inflight[2])
+void blk_mq_count_in_driver_rw(struct request_queue *q,
+ struct block_device *part,
+ unsigned int inflight[2])
{
struct mq_inflight mi = { .part = part };
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 3011a78cf16a..3897522f014f 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -246,10 +246,9 @@ static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
return hctx->nr_ctx && hctx->tags;
}
-unsigned int blk_mq_in_flight(struct request_queue *q,
- struct block_device *part);
-void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
- unsigned int inflight[2]);
+void blk_mq_count_in_driver_rw(struct request_queue *q,
+ struct block_device *part,
+ unsigned int inflight[2]);
static inline void blk_mq_put_dispatch_budget(struct request_queue *q,
int budget_token)
diff --git a/block/blk.h b/block/blk.h
index 006e3be433d2..f476f233f195 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -418,7 +418,6 @@ void blk_apply_bdi_limits(struct backing_dev_info *bdi,
int blk_dev_init(void);
void update_io_ticks(struct block_device *part, unsigned long now, bool end);
-unsigned int part_in_flight(struct block_device *part);
static inline void req_set_nomerge(struct request_queue *q, struct request *req)
{
diff --git a/block/genhd.c b/block/genhd.c
index c2bd86cd09de..8329bd539be3 100644
--- a/block/genhd.c
+++ b/block/genhd.c
@@ -125,37 +125,35 @@ static void part_stat_read_all(struct block_device *part,
}
}
-unsigned int part_in_flight(struct block_device *part)
-{
- unsigned int inflight = 0;
- int cpu;
-
- for_each_possible_cpu(cpu) {
- inflight += part_stat_local_read_cpu(part, in_flight[0], cpu) +
- part_stat_local_read_cpu(part, in_flight[1], cpu);
- }
- if ((int)inflight < 0)
- inflight = 0;
-
- return inflight;
-}
-
-static void part_in_flight_rw(struct block_device *part,
- unsigned int inflight[2])
+/**
+ * bdev_count_inflight_rw - get the number of inflight IOs for a block device.
+ *
+ * @bdev: the block device.
+ * @inflight: return number of inflight IOs, @inflight[0] for read and
+ * @inflight[1] for write.
+ *
+ * For bio-based block device, IO is inflight from bdev_start_io_acct(), and for
+ * rq-based block device, IO is inflight from blk_account_io_start().
+ *
+ * Noted, for rq-based block device, use blk_mq_count_in_driver_rw() to get the
+ * number of requests issued to driver.
+ */
+void bdev_count_inflight_rw(struct block_device *bdev, unsigned int inflight[2])
{
int cpu;
inflight[0] = 0;
inflight[1] = 0;
for_each_possible_cpu(cpu) {
- inflight[0] += part_stat_local_read_cpu(part, in_flight[0], cpu);
- inflight[1] += part_stat_local_read_cpu(part, in_flight[1], cpu);
+ inflight[0] += part_stat_local_read_cpu(bdev, in_flight[0], cpu);
+ inflight[1] += part_stat_local_read_cpu(bdev, in_flight[1], cpu);
}
- if ((int)inflight[0] < 0)
+ if (WARN_ON_ONCE((int)inflight[0] < 0))
inflight[0] = 0;
- if ((int)inflight[1] < 0)
+ if (WARN_ON_ONCE((int)inflight[1] < 0))
inflight[1] = 0;
}
+EXPORT_SYMBOL_GPL(bdev_count_inflight_rw);
/*
* Can be deleted altogether. Later.
@@ -1005,7 +1003,7 @@ ssize_t part_stat_show(struct device *dev,
struct disk_stats stat;
unsigned int inflight;
- inflight = part_in_flight(bdev);
+ inflight = bdev_count_inflight(bdev);
if (inflight) {
part_stat_lock();
update_io_ticks(bdev, jiffies, true);
@@ -1050,9 +1048,9 @@ ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
unsigned int inflight[2];
if (queue_is_mq(q))
- blk_mq_in_flight_rw(q, bdev, inflight);
+ blk_mq_count_in_driver_rw(q, bdev, inflight);
else
- part_in_flight_rw(bdev, inflight);
+ bdev_count_inflight_rw(bdev, inflight);
return sysfs_emit(buf, "%8u %8u\n", inflight[0], inflight[1]);
}
@@ -1307,7 +1305,7 @@ static int diskstats_show(struct seq_file *seqf, void *v)
if (bdev_is_partition(hd) && !bdev_nr_sectors(hd))
continue;
- inflight = part_in_flight(hd);
+ inflight = bdev_count_inflight(hd);
if (inflight) {
part_stat_lock();
update_io_ticks(hd, jiffies, true);
diff --git a/include/linux/part_stat.h b/include/linux/part_stat.h
index c5e9cac0575e..6248e59dc891 100644
--- a/include/linux/part_stat.h
+++ b/include/linux/part_stat.h
@@ -79,4 +79,14 @@ static inline void part_stat_set_all(struct block_device *part, int value)
#define part_stat_local_read_cpu(part, field, cpu) \
local_read(&(part_stat_get_cpu(part, field, cpu)))
+void bdev_count_inflight_rw(struct block_device *bdev, unsigned int inflight[2]);
+
+static inline unsigned int bdev_count_inflight(struct block_device *bdev)
+{
+ unsigned int inflight[2];
+
+ bdev_count_inflight_rw(bdev, inflight);
+
+ return inflight[0] + inflight[1];
+}
#endif /* _LINUX_PART_STAT_H */
--
2.39.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
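As a quick illustration of the newly exported API, here is a minimal
sketch of a hypothetical out-of-core caller (report_inflight() is an
invented name, not part of this series; the helper signatures are the
ones introduced by this patch):

#include <linux/blkdev.h>
#include <linux/part_stat.h>

/* Hypothetical caller: log per-direction and total inflight IO. */
static void report_inflight(struct block_device *bdev)
{
	unsigned int inflight[2];

	/* inflight[0] counts reads, inflight[1] counts writes */
	bdev_count_inflight_rw(bdev, inflight);
	pr_info("%pg: %u reads, %u writes inflight\n",
		bdev, inflight[0], inflight[1]);

	/* combined count via the new inline helper */
	pr_info("%pg: %u total inflight\n", bdev, bdev_count_inflight(bdev));
}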
* [PATCH v2 2/5] md: record dm-raid gendisk in mddev
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
2025-04-18 1:09 ` [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs Yu Kuai
@ 2025-04-18 1:09 ` Yu Kuai
2025-04-22 6:00 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 3/5] md: add a new api sync_io_depth Yu Kuai
` (2 subsequent siblings)
4 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
A following patch will use this gendisk to check if there is normal IO
completed or inflight, in order to fix a problem in mdraid where
foreground IO can be starved by background sync IO.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/dm-raid.c | 3 +++
drivers/md/md.h | 3 ++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 6adc55fd90d3..127138c61be5 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -14,6 +14,7 @@
#include "raid5.h"
#include "raid10.h"
#include "md-bitmap.h"
+#include "dm-core.h"
#include <linux/device-mapper.h>
@@ -3308,6 +3309,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
/* Disable/enable discard support on raid set. */
configure_discard_support(rs);
+ rs->md.dm_gendisk = ti->table->md->disk;
mddev_unlock(&rs->md);
return 0;
@@ -3327,6 +3329,7 @@ static void raid_dtr(struct dm_target *ti)
mddev_lock_nointr(&rs->md);
md_stop(&rs->md);
+ rs->md.dm_gendisk = NULL;
mddev_unlock(&rs->md);
if (work_pending(&rs->md.event_work))
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 1cf00a04bcdd..9d55b4630077 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -404,7 +404,8 @@ struct mddev {
* are happening, so run/
* takeover/stop are not safe
*/
- struct gendisk *gendisk;
+ struct gendisk *gendisk; /* mdraid gendisk */
+ struct gendisk *dm_gendisk; /* dm-raid gendisk */
struct kobject kobj;
int hold_active;
--
2.39.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
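For context, a condensed sketch of how the new field is expected to be
consumed (this mirrors the gendisk selection done in patch 4;
mddev_io_disk() is a hypothetical helper name used only for
illustration):

/* Pick the top-level gendisk that normal IO is accounted against. */
static struct gendisk *mddev_io_disk(struct mddev *mddev)
{
	return mddev_is_dm(mddev) ? mddev->dm_gendisk : mddev->gendisk;
}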
* [PATCH v2 3/5] md: add a new api sync_io_depth
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
2025-04-18 1:09 ` [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs Yu Kuai
2025-04-18 1:09 ` [PATCH v2 2/5] md: record dm-raid gendisk in mddev Yu Kuai
@ 2025-04-18 1:09 ` Yu Kuai
2025-04-22 6:15 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 4/5] md: fix is_mddev_idle() Yu Kuai
2025-04-18 1:09 ` [PATCH v2 5/5] md: cleanup accounting for issued sync IO Yu Kuai
4 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
Currently, if the sync speed is above speed_min and below speed_max,
md_do_sync() will wait for all sync IO to be done before issuing new
sync IO, which means the sync IO depth is limited to just 1.
This limit is too low. To prevent the sync speed from dropping
noticeably after is_mddev_idle() is fixed in the next patch, add a new
API for limiting the sync IO depth, with a default value of 32.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/md.c | 109 +++++++++++++++++++++++++++++++++++++++---------
drivers/md/md.h | 1 +
2 files changed, 91 insertions(+), 19 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 438e71e45c16..52cadfce7e8d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -111,32 +111,48 @@ static void md_wakeup_thread_directly(struct md_thread __rcu *thread);
/* Default safemode delay: 200 msec */
#define DEFAULT_SAFEMODE_DELAY ((200 * HZ)/1000 +1)
/*
- * Current RAID-1,4,5 parallel reconstruction 'guaranteed speed limit'
- * is 1000 KB/sec, so the extra system load does not show up that much.
- * Increase it if you want to have more _guaranteed_ speed. Note that
- * the RAID driver will use the maximum available bandwidth if the IO
- * subsystem is idle. There is also an 'absolute maximum' reconstruction
- * speed limit - in case reconstruction slows down your system despite
- * idle IO detection.
+ * Current RAID-1,4,5,6,10 parallel reconstruction 'guaranteed speed limit'
+ * is sysctl_speed_limit_min, 1000 KB/sec by default, so the extra system load
+ * does not show up that much. Increase it if you want to have more guaranteed
+ * speed. Note that the RAID driver will use the maximum bandwidth
+ * sysctl_speed_limit_max, 200 MB/sec by default, if the IO subsystem is idle.
*
- * you can change it via /proc/sys/dev/raid/speed_limit_min and _max.
- * or /sys/block/mdX/md/sync_speed_{min,max}
+ * Background sync IO speed control:
+ *
+ * - below speed min:
+ * no limit;
+ * - above speed min and below speed max:
+ * a) if mddev is idle, then no limit;
+ * b) if mddev is busy handling normal IO, then limit inflight sync IO
+ * to sync_io_depth;
+ * - above speed max:
+ * sync IO can't be issued;
+ *
+ * Following configurations can be changed via /proc/sys/dev/raid/ for system
+ * or /sys/block/mdX/md/ for one array.
*/
-
static int sysctl_speed_limit_min = 1000;
static int sysctl_speed_limit_max = 200000;
-static inline int speed_min(struct mddev *mddev)
+static int sysctl_sync_io_depth = 32;
+
+static int speed_min(struct mddev *mddev)
{
return mddev->sync_speed_min ?
mddev->sync_speed_min : sysctl_speed_limit_min;
}
-static inline int speed_max(struct mddev *mddev)
+static int speed_max(struct mddev *mddev)
{
return mddev->sync_speed_max ?
mddev->sync_speed_max : sysctl_speed_limit_max;
}
+static int sync_io_depth(struct mddev *mddev)
+{
+ return mddev->sync_io_depth ?
+ mddev->sync_io_depth : sysctl_sync_io_depth;
+}
+
static void rdev_uninit_serial(struct md_rdev *rdev)
{
if (!test_and_clear_bit(CollisionCheck, &rdev->flags))
@@ -293,14 +309,21 @@ static const struct ctl_table raid_table[] = {
.procname = "speed_limit_min",
.data = &sysctl_speed_limit_min,
.maxlen = sizeof(int),
- .mode = S_IRUGO|S_IWUSR,
+ .mode = 0644,
.proc_handler = proc_dointvec,
},
{
.procname = "speed_limit_max",
.data = &sysctl_speed_limit_max,
.maxlen = sizeof(int),
- .mode = S_IRUGO|S_IWUSR,
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
+ {
+ .procname = "sync_io_depth",
+ .data = &sysctl_sync_io_depth,
+ .maxlen = sizeof(int),
+ .mode = 0644,
.proc_handler = proc_dointvec,
},
};
@@ -5091,7 +5114,7 @@ static ssize_t
sync_min_show(struct mddev *mddev, char *page)
{
return sprintf(page, "%d (%s)\n", speed_min(mddev),
- mddev->sync_speed_min ? "local": "system");
+ mddev->sync_speed_min ? "local" : "system");
}
static ssize_t
@@ -5100,7 +5123,7 @@ sync_min_store(struct mddev *mddev, const char *buf, size_t len)
unsigned int min;
int rv;
- if (strncmp(buf, "system", 6)==0) {
+ if (strncmp(buf, "system", 6) == 0) {
min = 0;
} else {
rv = kstrtouint(buf, 10, &min);
@@ -5120,7 +5143,7 @@ static ssize_t
sync_max_show(struct mddev *mddev, char *page)
{
return sprintf(page, "%d (%s)\n", speed_max(mddev),
- mddev->sync_speed_max ? "local": "system");
+ mddev->sync_speed_max ? "local" : "system");
}
static ssize_t
@@ -5129,7 +5152,7 @@ sync_max_store(struct mddev *mddev, const char *buf, size_t len)
unsigned int max;
int rv;
- if (strncmp(buf, "system", 6)==0) {
+ if (strncmp(buf, "system", 6) == 0) {
max = 0;
} else {
rv = kstrtouint(buf, 10, &max);
@@ -5145,6 +5168,35 @@ sync_max_store(struct mddev *mddev, const char *buf, size_t len)
static struct md_sysfs_entry md_sync_max =
__ATTR(sync_speed_max, S_IRUGO|S_IWUSR, sync_max_show, sync_max_store);
+static ssize_t
+sync_io_depth_show(struct mddev *mddev, char *page)
+{
+ return sprintf(page, "%d (%s)\n", sync_io_depth(mddev),
+ mddev->sync_io_depth ? "local" : "system");
+}
+
+static ssize_t
+sync_io_depth_store(struct mddev *mddev, const char *buf, size_t len)
+{
+ unsigned int max;
+ int rv;
+
+ if (strncmp(buf, "system", 6) == 0) {
+ max = 0;
+ } else {
+ rv = kstrtouint(buf, 10, &max);
+ if (rv < 0)
+ return rv;
+ if (max == 0)
+ return -EINVAL;
+ }
+ mddev->sync_io_depth = max;
+ return len;
+}
+
+static struct md_sysfs_entry md_sync_io_depth =
+__ATTR_RW(sync_io_depth);
+
static ssize_t
degraded_show(struct mddev *mddev, char *page)
{
@@ -5671,6 +5723,7 @@ static struct attribute *md_redundancy_attrs[] = {
&md_mismatches.attr,
&md_sync_min.attr,
&md_sync_max.attr,
+ &md_sync_io_depth.attr,
&md_sync_speed.attr,
&md_sync_force_parallel.attr,
&md_sync_completed.attr,
@@ -8927,6 +8980,23 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
}
}
+static bool sync_io_within_limit(struct mddev *mddev)
+{
+ int io_sectors;
+
+ /*
+ * For raid456, sync IO is stripe(4k) per IO, for other levels, it's
+ * RESYNC_PAGES(64k) per IO.
+ */
+ if (mddev->level == 4 || mddev->level == 5 || mddev->level == 6)
+ io_sectors = 8;
+ else
+ io_sectors = 128;
+
+ return atomic_read(&mddev->recovery_active) <
+ io_sectors * sync_io_depth(mddev);
+}
+
#define SYNC_MARKS 10
#define SYNC_MARK_STEP (3*HZ)
#define UPDATE_FREQUENCY (5*60*HZ)
@@ -9195,7 +9265,8 @@ void md_do_sync(struct md_thread *thread)
msleep(500);
goto repeat;
}
- if (!is_mddev_idle(mddev, 0)) {
+ if (!sync_io_within_limit(mddev) &&
+ !is_mddev_idle(mddev, 0)) {
/*
* Give other IO more of a chance.
* The faster the devices, the less we wait.
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 9d55b4630077..b57842188f18 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -484,6 +484,7 @@ struct mddev {
/* if zero, use the system-wide default */
int sync_speed_min;
int sync_speed_max;
+ int sync_io_depth;
/* resync even though the same disks are shared among md-devices */
int parallel_resync;
--
2.39.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
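To summarize how the new knob interacts with the existing speed limits,
a condensed sketch of the throttling decision in md_do_sync() after
this patch (the surrounding loop is paraphrased, not verbatim;
sync_io_within_limit() and is_mddev_idle() are as defined in this
series):

	if (currspeed > speed_min(mddev)) {
		if (currspeed > speed_max(mddev)) {
			/* above speed max: sync IO can't be issued */
			msleep(500);
			goto repeat;
		}
		/*
		 * between speed min and speed max: throttle only if
		 * inflight sync IO already exceeds sync_io_depth and
		 * the array is busy with normal IO
		 */
		if (!sync_io_within_limit(mddev) &&
		    !is_mddev_idle(mddev, 0))
			wait_event(mddev->recovery_wait,
				   !atomic_read(&mddev->recovery_active));
	}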
* [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
` (2 preceding siblings ...)
2025-04-18 1:09 ` [PATCH v2 3/5] md: add a new api sync_io_depth Yu Kuai
@ 2025-04-18 1:09 ` Yu Kuai
2025-04-19 1:42 ` Su Yue
2025-04-22 6:35 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 5/5] md: cleanup accounting for issued sync IO Yu Kuai
4 siblings, 2 replies; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
If sync_speed is above speed_min, then is_mddev_idle() will be called
for each sync IO to check if the array is idle, and inflight sync IO
will be limited if the array is not idle.
However, when running mkfs.ext4 on a large raid5 array while recovery
is in progress, it's found that sync_speed stays above speed_min while
lots of stripes are used for sync IO, causing long delays for
mkfs.ext4.
The root cause is the following check in is_mddev_idle():
t1: submit sync IO: events1 = completed IO - issued sync IO
t2: submit next sync IO: events2 = completed IO - issued sync IO
if (events2 - events1 > 64)
As a consequence, the more sync IO is issued, the less likely the check
will pass. And when completed normal IO exceeds issued sync IO, the
condition will finally pass and is_mddev_idle() will return false;
however, last_events is updated as well, hence is_mddev_idle() can only
return false once in a while.
Fix this problem by considering the array idle only if, since the last
check:
1) the mddev has no normal IO completed;
2) the mddev has no normal IO inflight;
3) if any member disk is a partition, all other partitions have no IO
completed.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/md.c | 84 +++++++++++++++++++++++++++----------------------
drivers/md/md.h | 3 +-
2 files changed, 48 insertions(+), 39 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 52cadfce7e8d..dfd85a5d6112 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -8625,50 +8625,58 @@ void md_cluster_stop(struct mddev *mddev)
put_cluster_ops(mddev);
}
-static int is_mddev_idle(struct mddev *mddev, int init)
+static bool is_rdev_holder_idle(struct md_rdev *rdev, bool init)
{
+ unsigned long last_events = rdev->last_events;
+
+ if (!bdev_is_partition(rdev->bdev))
+ return true;
+
+ /*
+ * If rdev is partition, and user doesn't issue IO to the array, the
+ * array is still not idle if user issues IO to other partitions.
+ */
+ rdev->last_events = part_stat_read_accum(rdev->bdev->bd_disk->part0,
+ sectors) -
+ part_stat_read_accum(rdev->bdev, sectors);
+
+ if (!init && rdev->last_events > last_events)
+ return false;
+
+ return true;
+}
+
+/*
+ * mddev is idle if following conditions are match since last check:
+ * 1) mddev doesn't have normal IO completed;
+ * 2) mddev doesn't have inflight normal IO;
+ * 3) if any member disk is partition, and other partitions doesn't have IO
+ * completed;
+ *
+ * Noted this checking rely on IO accounting is enabled.
+ */
+static bool is_mddev_idle(struct mddev *mddev, int init)
+{
+ unsigned long last_events = mddev->last_events;
+ struct gendisk *disk;
struct md_rdev *rdev;
- int idle;
- int curr_events;
+ bool idle = true;
- idle = 1;
- rcu_read_lock();
- rdev_for_each_rcu(rdev, mddev) {
- struct gendisk *disk = rdev->bdev->bd_disk;
+ disk = mddev_is_dm(mddev) ? mddev->dm_gendisk : mddev->gendisk;
+ if (!disk)
+ return true;
- if (!init && !blk_queue_io_stat(disk->queue))
- continue;
+ mddev->last_events = part_stat_read_accum(disk->part0, sectors);
+ if (!init && (mddev->last_events > last_events ||
+ bdev_count_inflight(disk->part0)))
+ idle = false;
- curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
- atomic_read(&disk->sync_io);
- /* sync IO will cause sync_io to increase before the disk_stats
- * as sync_io is counted when a request starts, and
- * disk_stats is counted when it completes.
- * So resync activity will cause curr_events to be smaller than
- * when there was no such activity.
- * non-sync IO will cause disk_stat to increase without
- * increasing sync_io so curr_events will (eventually)
- * be larger than it was before. Once it becomes
- * substantially larger, the test below will cause
- * the array to appear non-idle, and resync will slow
- * down.
- * If there is a lot of outstanding resync activity when
- * we set last_event to curr_events, then all that activity
- * completing might cause the array to appear non-idle
- * and resync will be slowed down even though there might
- * not have been non-resync activity. This will only
- * happen once though. 'last_events' will soon reflect
- * the state where there is little or no outstanding
- * resync requests, and further resync activity will
- * always make curr_events less than last_events.
- *
- */
- if (init || curr_events - rdev->last_events > 64) {
- rdev->last_events = curr_events;
- idle = 0;
- }
- }
+ rcu_read_lock();
+ rdev_for_each_rcu(rdev, mddev)
+ if (!is_rdev_holder_idle(rdev, init))
+ idle = false;
rcu_read_unlock();
+
return idle;
}
diff --git a/drivers/md/md.h b/drivers/md/md.h
index b57842188f18..1d51c2405d3d 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -132,7 +132,7 @@ struct md_rdev {
sector_t sectors; /* Device size (in 512bytes sectors) */
struct mddev *mddev; /* RAID array if running */
- int last_events; /* IO event timestamp */
+ unsigned long last_events; /* IO event timestamp */
/*
* If meta_bdev is non-NULL, it means that a separate device is
@@ -520,6 +520,7 @@ struct mddev {
* adding a spare
*/
+ unsigned long last_events; /* IO event timestamp */
atomic_t recovery_active; /* blocks scheduled, but not written */
wait_queue_head_t recovery_wait;
sector_t recovery_cp;
--
2.39.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
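To make the failure mode in this patch concrete, an illustrative worked
example of the old check (the numbers are invented for illustration):

/*
 * Old check, per member disk (simplified):
 *
 *   curr_events = completed sectors (normal + sync) - issued sync sectors
 *   non-idle iff curr_events - last_events > 64
 *
 * Suppose that between two checks 650 sync sectors are issued, 600 of
 * them complete, and 50 sectors of normal IO complete:
 *
 *   t1: completed = 1000, issued sync = 900  -> events1 = 100
 *   t2: completed = 1650, issued sync = 1550 -> events2 = 100
 *
 * events2 - events1 = 0, not > 64, so the array is treated as idle
 * even though normal IO is in progress: the more sync IO is issued,
 * the more completed normal IO it takes to push the delta past 64.
 */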
* [PATCH v2 5/5] md: cleanup accounting for issued sync IO
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
` (3 preceding siblings ...)
2025-04-18 1:09 ` [PATCH v2 4/5] md: fix is_mddev_idle() Yu Kuai
@ 2025-04-18 1:09 ` Yu Kuai
2025-04-22 6:36 ` Xiao Ni
4 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-18 1:09 UTC (permalink / raw)
To: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl
Cc: linux-block, linux-kernel, dm-devel, linux-raid, yukuai1,
yi.zhang, yangerkun, johnny.chenyi
From: Yu Kuai <yukuai3@huawei.com>
The sync IO accounting is no longer used and can be removed; also
remove the field 'gendisk->sync_io'.
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
drivers/md/md.h | 11 -----------
drivers/md/raid1.c | 3 ---
drivers/md/raid10.c | 9 ---------
drivers/md/raid5.c | 8 --------
include/linux/blkdev.h | 1 -
5 files changed, 32 deletions(-)
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 1d51c2405d3d..07b40a55ed5f 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -717,17 +717,6 @@ static inline int mddev_trylock(struct mddev *mddev)
}
extern void mddev_unlock(struct mddev *mddev);
-static inline void md_sync_acct(struct block_device *bdev, unsigned long nr_sectors)
-{
- if (blk_queue_io_stat(bdev->bd_disk->queue))
- atomic_add(nr_sectors, &bdev->bd_disk->sync_io);
-}
-
-static inline void md_sync_acct_bio(struct bio *bio, unsigned long nr_sectors)
-{
- md_sync_acct(bio->bi_bdev, nr_sectors);
-}
-
struct md_personality
{
struct md_submodule_head head;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index de9bccbe7337..657d481525be 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2382,7 +2382,6 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
wbio->bi_end_io = end_sync_write;
atomic_inc(&r1_bio->remaining);
- md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio));
submit_bio_noacct(wbio);
}
@@ -3055,7 +3054,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
bio = r1_bio->bios[i];
if (bio->bi_end_io == end_sync_read) {
read_targets--;
- md_sync_acct_bio(bio, nr_sectors);
if (read_targets == 1)
bio->bi_opf &= ~MD_FAILFAST;
submit_bio_noacct(bio);
@@ -3064,7 +3062,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
} else {
atomic_set(&r1_bio->remaining, 1);
bio = r1_bio->bios[r1_bio->read_disk];
- md_sync_acct_bio(bio, nr_sectors);
if (read_targets == 1)
bio->bi_opf &= ~MD_FAILFAST;
submit_bio_noacct(bio);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index ba32bac975b8..dce06bf65016 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2426,7 +2426,6 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
atomic_inc(&conf->mirrors[d].rdev->nr_pending);
atomic_inc(&r10_bio->remaining);
- md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(tbio));
if (test_bit(FailFast, &conf->mirrors[d].rdev->flags))
tbio->bi_opf |= MD_FAILFAST;
@@ -2448,8 +2447,6 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
bio_copy_data(tbio, fbio);
d = r10_bio->devs[i].devnum;
atomic_inc(&r10_bio->remaining);
- md_sync_acct(conf->mirrors[d].replacement->bdev,
- bio_sectors(tbio));
submit_bio_noacct(tbio);
}
@@ -2583,13 +2580,10 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
d = r10_bio->devs[1].devnum;
if (wbio->bi_end_io) {
atomic_inc(&conf->mirrors[d].rdev->nr_pending);
- md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
submit_bio_noacct(wbio);
}
if (wbio2) {
atomic_inc(&conf->mirrors[d].replacement->nr_pending);
- md_sync_acct(conf->mirrors[d].replacement->bdev,
- bio_sectors(wbio2));
submit_bio_noacct(wbio2);
}
}
@@ -3757,7 +3751,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
r10_bio->sectors = nr_sectors;
if (bio->bi_end_io == end_sync_read) {
- md_sync_acct_bio(bio, nr_sectors);
bio->bi_status = 0;
submit_bio_noacct(bio);
}
@@ -4880,7 +4873,6 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
r10_bio->sectors = nr_sectors;
/* Now submit the read */
- md_sync_acct_bio(read_bio, r10_bio->sectors);
atomic_inc(&r10_bio->remaining);
read_bio->bi_next = NULL;
submit_bio_noacct(read_bio);
@@ -4940,7 +4932,6 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio)
continue;
atomic_inc(&rdev->nr_pending);
- md_sync_acct_bio(b, r10_bio->sectors);
atomic_inc(&r10_bio->remaining);
b->bi_next = NULL;
submit_bio_noacct(b);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 6389383166c0..ca5b0e8ba707 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1240,10 +1240,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
}
if (rdev) {
- if (s->syncing || s->expanding || s->expanded
- || s->replacing)
- md_sync_acct(rdev->bdev, RAID5_STRIPE_SECTORS(conf));
-
set_bit(STRIPE_IO_STARTED, &sh->state);
bio_init(bi, rdev->bdev, &dev->vec, 1, op | op_flags);
@@ -1300,10 +1296,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
submit_bio_noacct(bi);
}
if (rrdev) {
- if (s->syncing || s->expanding || s->expanded
- || s->replacing)
- md_sync_acct(rrdev->bdev, RAID5_STRIPE_SECTORS(conf));
-
set_bit(STRIPE_IO_STARTED, &sh->state);
bio_init(rbi, rrdev->bdev, &dev->rvec, 1, op | op_flags);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e39c45bc0a97..f3a625b00734 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -182,7 +182,6 @@ struct gendisk {
struct list_head slave_bdevs;
#endif
struct timer_rand_state *random;
- atomic_t sync_io; /* RAID */
struct disk_events *ev;
#ifdef CONFIG_BLK_DEV_ZONED
--
2.39.2
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-18 1:09 ` [PATCH v2 4/5] md: fix is_mddev_idle() Yu Kuai
@ 2025-04-19 1:42 ` Su Yue
2025-04-19 2:00 ` Yu Kuai
2025-04-22 6:35 ` Xiao Ni
1 sibling, 1 reply; 18+ messages in thread
From: Su Yue @ 2025-04-19 1:42 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri 18 Apr 2025 at 09:09, Yu Kuai <yukuai1@huaweicloud.com>
wrote:
> From: Yu Kuai <yukuai3@huawei.com>
>
> If sync_speed is above speed_min, then is_mddev_idle() will be
> called for each sync IO to check if the array is idle, and
> inflight sync IO will be limited if the array is not idle.
>
> However, when running mkfs.ext4 on a large raid5 array while
> recovery is in progress, it's found that sync_speed stays above
> speed_min while lots of stripes are used for sync IO, causing
> long delays for mkfs.ext4.
>
> The root cause is the following check in is_mddev_idle():
>
> t1: submit sync IO: events1 = completed IO - issued sync IO
> t2: submit next sync IO: events2 = completed IO - issued sync IO
> if (events2 - events1 > 64)
>
> As a consequence, the more sync IO is issued, the less likely the
> check will pass. And when completed normal IO exceeds issued sync
> IO, the condition will finally pass and is_mddev_idle() will
> return false; however, last_events is updated as well, hence
> is_mddev_idle() can only return false once in a while.
>
> Fix this problem by considering the array idle only if, since the
> last check:
>
> 1) the mddev has no normal IO completed;
> 2) the mddev has no normal IO inflight;
> 3) if any member disk is a partition, all other partitions have
> no IO completed.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> drivers/md/md.c | 84
> +++++++++++++++++++++++++++----------------------
> drivers/md/md.h | 3 +-
> 2 files changed, 48 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 52cadfce7e8d..dfd85a5d6112 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -8625,50 +8625,58 @@ void md_cluster_stop(struct mddev
> *mddev)
> put_cluster_ops(mddev);
> }
>
> -static int is_mddev_idle(struct mddev *mddev, int init)
> +static bool is_rdev_holder_idle(struct md_rdev *rdev, bool
> init)
> {
> + unsigned long last_events = rdev->last_events;
> +
> + if (!bdev_is_partition(rdev->bdev))
> + return true;
> +
> + /*
> + * If rdev is partition, and user doesn't issue IO to the
> array, the
> + * array is still not idle if user issues IO to other
> partitions.
> + */
> + rdev->last_events =
> part_stat_read_accum(rdev->bdev->bd_disk->part0,
> + sectors) -
> + part_stat_read_accum(rdev->bdev, sectors);
> +
> + if (!init && rdev->last_events > last_events)
> + return false;
> +
> + return true;
> +}
> +
> +/*
> + * mddev is idle if following conditions are match since last
> check:
> + * 1) mddev doesn't have normal IO completed;
> + * 2) mddev doesn't have inflight normal IO;
> + * 3) if any member disk is partition, and other partitions
> doesn't have IO
> + * completed;
> + *
> + * Noted this checking rely on IO accounting is enabled.
> + */
> +static bool is_mddev_idle(struct mddev *mddev, int init)
> +{
> + unsigned long last_events = mddev->last_events;
> + struct gendisk *disk;
> struct md_rdev *rdev;
> - int idle;
> - int curr_events;
> + bool idle = true;
>
> - idle = 1;
> - rcu_read_lock();
> - rdev_for_each_rcu(rdev, mddev) {
> - struct gendisk *disk = rdev->bdev->bd_disk;
> + disk = mddev_is_dm(mddev) ? mddev->dm_gendisk :
> mddev->gendisk;
> + if (!disk)
> + return true;
>
> - if (!init && !blk_queue_io_stat(disk->queue))
> - continue;
> + mddev->last_events = part_stat_read_accum(disk->part0,
> sectors);
> + if (!init && (mddev->last_events > last_events ||
> + bdev_count_inflight(disk->part0)))
> + idle = false;
>
Forgot return or goto here?
--
Su
> - curr_events = (int)part_stat_read_accum(disk->part0,
> sectors) -
> - atomic_read(&disk->sync_io);
> - /* sync IO will cause sync_io to increase before the
> disk_stats
> - * as sync_io is counted when a request starts, and
> - * disk_stats is counted when it completes.
> - * So resync activity will cause curr_events to be smaller
> than
> - * when there was no such activity.
> - * non-sync IO will cause disk_stat to increase without
> - * increasing sync_io so curr_events will (eventually)
> - * be larger than it was before. Once it becomes
> - * substantially larger, the test below will cause
> - * the array to appear non-idle, and resync will slow
> - * down.
> - * If there is a lot of outstanding resync activity when
> - * we set last_event to curr_events, then all that
> activity
> - * completing might cause the array to appear non-idle
> - * and resync will be slowed down even though there might
> - * not have been non-resync activity. This will only
> - * happen once though. 'last_events' will soon reflect
> - * the state where there is little or no outstanding
> - * resync requests, and further resync activity will
> - * always make curr_events less than last_events.
> - *
> - */
> - if (init || curr_events - rdev->last_events > 64) {
> - rdev->last_events = curr_events;
> - idle = 0;
> - }
> - }
> + rcu_read_lock();
> + rdev_for_each_rcu(rdev, mddev)
> + if (!is_rdev_holder_idle(rdev, init))
> + idle = false;
> rcu_read_unlock();
> +
> return idle;
> }
>
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index b57842188f18..1d51c2405d3d 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -132,7 +132,7 @@ struct md_rdev {
>
> sector_t sectors; /* Device size (in 512bytes sectors)
> */
> struct mddev *mddev; /* RAID array if running */
> - int last_events; /* IO event timestamp */
> + unsigned long last_events; /* IO event timestamp */
>
> /*
> * If meta_bdev is non-NULL, it means that a separate device
> is
> @@ -520,6 +520,7 @@ struct mddev {
> * adding a spare
> */
>
> + unsigned long last_events; /* IO event timestamp
> */
> atomic_t recovery_active; /* blocks scheduled, but
> not written */
> wait_queue_head_t recovery_wait;
> sector_t recovery_cp;
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-19 1:42 ` Su Yue
@ 2025-04-19 2:00 ` Yu Kuai
2025-04-19 5:03 ` Su Yue
0 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-19 2:00 UTC (permalink / raw)
To: Su Yue, Yu Kuai
Cc: axboe, xni, agk, snitzer, mpatocka, song, viro, akpm, nadav.amit,
ubizjak, cl, linux-block, linux-kernel, dm-devel, linux-raid,
yi.zhang, yangerkun, johnny.chenyi, yukuai (C)
Hi,
在 2025/04/19 9:42, Su Yue 写道:
> On Fri 18 Apr 2025 at 09:09, Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> If sync_speed is above speed_min, then is_mddev_idle() will be called
>> for each sync IO to check if the array is idle, and inflight sync IO
>> will be limited if the array is not idle.
>>
>> However, when running mkfs.ext4 on a large raid5 array while recovery
>> is in progress, it's found that sync_speed stays above speed_min while
>> lots of stripes are used for sync IO, causing long delays for mkfs.ext4.
>>
>> The root cause is the following check in is_mddev_idle():
>>
>> t1: submit sync IO: events1 = completed IO - issued sync IO
>> t2: submit next sync IO: events2 = completed IO - issued sync IO
>> if (events2 - events1 > 64)
>>
>> As a consequence, the more sync IO is issued, the less likely the check
>> will pass. And when completed normal IO exceeds issued sync IO, the
>> condition will finally pass and is_mddev_idle() will return false;
>> however, last_events is updated as well, hence is_mddev_idle() can only
>> return false once in a while.
>>
>> Fix this problem by considering the array idle only if, since the last
>> check:
>>
>> 1) the mddev has no normal IO completed;
>> 2) the mddev has no normal IO inflight;
>> 3) if any member disk is a partition, all other partitions have no IO
>> completed.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>> drivers/md/md.c | 84 +++++++++++++++++++++++++++----------------------
>> drivers/md/md.h | 3 +-
>> 2 files changed, 48 insertions(+), 39 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 52cadfce7e8d..dfd85a5d6112 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -8625,50 +8625,58 @@ void md_cluster_stop(struct mddev *mddev)
>> put_cluster_ops(mddev);
>> }
>>
>> -static int is_mddev_idle(struct mddev *mddev, int init)
>> +static bool is_rdev_holder_idle(struct md_rdev *rdev, bool init)
>> {
>> + unsigned long last_events = rdev->last_events;
>> +
>> + if (!bdev_is_partition(rdev->bdev))
>> + return true;
>> +
>> + /*
>> + * If rdev is partition, and user doesn't issue IO to the array, the
>> + * array is still not idle if user issues IO to other partitions.
>> + */
>> + rdev->last_events = part_stat_read_accum(rdev->bdev->bd_disk->part0,
>> + sectors) -
>> + part_stat_read_accum(rdev->bdev, sectors);
>> +
>> + if (!init && rdev->last_events > last_events)
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> +/*
>> + * mddev is idle if following conditions are match since last check:
>> + * 1) mddev doesn't have normal IO completed;
>> + * 2) mddev doesn't have inflight normal IO;
>> + * 3) if any member disk is partition, and other partitions doesn't
>> have IO
>> + * completed;
>> + *
>> + * Noted this checking rely on IO accounting is enabled.
>> + */
>> +static bool is_mddev_idle(struct mddev *mddev, int init)
>> +{
>> + unsigned long last_events = mddev->last_events;
>> + struct gendisk *disk;
>> struct md_rdev *rdev;
>> - int idle;
>> - int curr_events;
>> + bool idle = true;
>>
>> - idle = 1;
>> - rcu_read_lock();
>> - rdev_for_each_rcu(rdev, mddev) {
>> - struct gendisk *disk = rdev->bdev->bd_disk;
>> + disk = mddev_is_dm(mddev) ? mddev->dm_gendisk : mddev->gendisk;
>> + if (!disk)
>> + return true;
>>
>> - if (!init && !blk_queue_io_stat(disk->queue))
>> - continue;
>> + mddev->last_events = part_stat_read_accum(disk->part0, sectors);
>> + if (!init && (mddev->last_events > last_events ||
>> + bdev_count_inflight(disk->part0)))
>> + idle = false;
>>
>
> Forgot return or goto here?
No, the following code still needs to be executed to init or update
rdev->last_events.
Thanks,
Kuai
>
> --
> Su
>
>> - curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
>> - atomic_read(&disk->sync_io);
>> - /* sync IO will cause sync_io to increase before the disk_stats
>> - * as sync_io is counted when a request starts, and
>> - * disk_stats is counted when it completes.
>> - * So resync activity will cause curr_events to be smaller than
>> - * when there was no such activity.
>> - * non-sync IO will cause disk_stat to increase without
>> - * increasing sync_io so curr_events will (eventually)
>> - * be larger than it was before. Once it becomes
>> - * substantially larger, the test below will cause
>> - * the array to appear non-idle, and resync will slow
>> - * down.
>> - * If there is a lot of outstanding resync activity when
>> - * we set last_event to curr_events, then all that activity
>> - * completing might cause the array to appear non-idle
>> - * and resync will be slowed down even though there might
>> - * not have been non-resync activity. This will only
>> - * happen once though. 'last_events' will soon reflect
>> - * the state where there is little or no outstanding
>> - * resync requests, and further resync activity will
>> - * always make curr_events less than last_events.
>> - *
>> - */
>> - if (init || curr_events - rdev->last_events > 64) {
>> - rdev->last_events = curr_events;
>> - idle = 0;
>> - }
>> - }
>> + rcu_read_lock();
>> + rdev_for_each_rcu(rdev, mddev)
>> + if (!is_rdev_holder_idle(rdev, init))
>> + idle = false;
>> rcu_read_unlock();
>> +
>> return idle;
>> }
>>
>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>> index b57842188f18..1d51c2405d3d 100644
>> --- a/drivers/md/md.h
>> +++ b/drivers/md/md.h
>> @@ -132,7 +132,7 @@ struct md_rdev {
>>
>> sector_t sectors; /* Device size (in 512bytes sectors) */
>> struct mddev *mddev; /* RAID array if running */
>> - int last_events; /* IO event timestamp */
>> + unsigned long last_events; /* IO event timestamp */
>>
>> /*
>> * If meta_bdev is non-NULL, it means that a separate device is
>> @@ -520,6 +520,7 @@ struct mddev {
>> * adding a spare
>> */
>>
>> + unsigned long last_events; /* IO event timestamp */
>> atomic_t recovery_active; /* blocks scheduled, but
>> not written */
>> wait_queue_head_t recovery_wait;
>> sector_t recovery_cp;
> .
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-19 2:00 ` Yu Kuai
@ 2025-04-19 5:03 ` Su Yue
0 siblings, 0 replies; 18+ messages in thread
From: Su Yue @ 2025-04-19 5:03 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, xni, agk, snitzer, mpatocka, song, viro, akpm, nadav.amit,
ubizjak, cl, linux-block, linux-kernel, dm-devel, linux-raid,
yi.zhang, yangerkun, johnny.chenyi, yukuai (C)
On Sat 19 Apr 2025 at 10:00, Yu Kuai <yukuai1@huaweicloud.com>
wrote:
> Hi,
>
> 在 2025/04/19 9:42, Su Yue 写道:
>> On Fri 18 Apr 2025 at 09:09, Yu Kuai <yukuai1@huaweicloud.com>
>> wrote:
>>
>>> From: Yu Kuai <yukuai3@huawei.com>
>>>
>>> If sync_speed is above speed_min, then is_mddev_idle() will be
>>> called for each sync IO to check if the array is idle, and
>>> inflight sync IO will be limited if the array is not idle.
>>>
>>> However, when running mkfs.ext4 on a large raid5 array while
>>> recovery is in progress, it's found that sync_speed stays above
>>> speed_min while lots of stripes are used for sync IO, causing
>>> long delays for mkfs.ext4.
>>>
>>> The root cause is the following check in is_mddev_idle():
>>>
>>> t1: submit sync IO: events1 = completed IO - issued sync IO
>>> t2: submit next sync IO: events2 = completed IO - issued sync IO
>>> if (events2 - events1 > 64)
>>>
>>> As a consequence, the more sync IO is issued, the less likely the
>>> check will pass. And when completed normal IO exceeds issued sync
>>> IO, the condition will finally pass and is_mddev_idle() will
>>> return false; however, last_events is updated as well, hence
>>> is_mddev_idle() can only return false once in a while.
>>>
>>> Fix this problem by considering the array idle only if, since the
>>> last check:
>>>
>>> 1) the mddev has no normal IO completed;
>>> 2) the mddev has no normal IO inflight;
>>> 3) if any member disk is a partition, all other partitions have
>>> no IO completed.
>>>
>>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>>> ---
>>> drivers/md/md.c | 84
>>> +++++++++++++++++++++++++++----------------------
>>> drivers/md/md.h | 3 +-
>>> 2 files changed, 48 insertions(+), 39 deletions(-)
>>>
>>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>>> index 52cadfce7e8d..dfd85a5d6112 100644
>>> --- a/drivers/md/md.c
>>> +++ b/drivers/md/md.c
>>> @@ -8625,50 +8625,58 @@ void md_cluster_stop(struct mddev
>>> *mddev)
>>> put_cluster_ops(mddev);
>>> }
>>>
>>> -static int is_mddev_idle(struct mddev *mddev, int init)
>>> +static bool is_rdev_holder_idle(struct md_rdev *rdev, bool
>>> init)
>>> {
>>> + unsigned long last_events = rdev->last_events;
>>> +
>>> + if (!bdev_is_partition(rdev->bdev))
>>> + return true;
>>> +
>>> + /*
>>> + * If rdev is partition, and user doesn't issue IO to the
>>> array, the
>>> + * array is still not idle if user issues IO to other
>>> partitions.
>>> + */
>>> + rdev->last_events =
>>> part_stat_read_accum(rdev->bdev->bd_disk->part0,
>>> + sectors) -
>>> + part_stat_read_accum(rdev->bdev, sectors);
>>> +
>>> + if (!init && rdev->last_events > last_events)
>>> + return false;
>>> +
>>> + return true;
>>> +}
>>> +
>>> +/*
>>> + * mddev is idle if following conditions are match since last
>>> check:
>>> + * 1) mddev doesn't have normal IO completed;
>>> + * 2) mddev doesn't have inflight normal IO;
>>> + * 3) if any member disk is partition, and other partitions
>>> doesn't have IO
>>> + * completed;
>>> + *
>>> + * Noted this checking rely on IO accounting is enabled.
>>> + */
>>> +static bool is_mddev_idle(struct mddev *mddev, int init)
>>> +{
>>> + unsigned long last_events = mddev->last_events;
>>> + struct gendisk *disk;
>>> struct md_rdev *rdev;
>>> - int idle;
>>> - int curr_events;
>>> + bool idle = true;
>>>
>>> - idle = 1;
>>> - rcu_read_lock();
>>> - rdev_for_each_rcu(rdev, mddev) {
>>> - struct gendisk *disk = rdev->bdev->bd_disk;
>>> + disk = mddev_is_dm(mddev) ? mddev->dm_gendisk :
>>> mddev->gendisk;
>>> + if (!disk)
>>> + return true;
>>>
>>> - if (!init && !blk_queue_io_stat(disk->queue))
>>> - continue;
>>> + mddev->last_events = part_stat_read_accum(disk->part0,
>>> sectors);
>>> + if (!init && (mddev->last_events > last_events ||
>>> + bdev_count_inflight(disk->part0)))
>>> + idle = false;
>>>
>> Forgot return or goto here?
>
> No, the following code still needs to be executed to init or update
> rdev->last_events.
>
Okay, I see. is_rdev_holder_idle does the work.
--
Su
> Thanks,
> Kuai
>
>> -- Su
>>
>>> - curr_events = (int)part_stat_read_accum(disk->part0,
>>> sectors) -
>>> - atomic_read(&disk->sync_io);
>>> - /* sync IO will cause sync_io to increase before the
>>> disk_stats
>>> - * as sync_io is counted when a request starts, and
>>> - * disk_stats is counted when it completes.
>>> - * So resync activity will cause curr_events to be
>>> smaller than
>>> - * when there was no such activity.
>>> - * non-sync IO will cause disk_stat to increase
>>> without
>>> - * increasing sync_io so curr_events will
>>> (eventually)
>>> - * be larger than it was before. Once it becomes
>>> - * substantially larger, the test below will cause
>>> - * the array to appear non-idle, and resync will slow
>>> - * down.
>>> - * If there is a lot of outstanding resync activity
>>> when
>>> - * we set last_event to curr_events, then all that
>>> activity
>>> - * completing might cause the array to appear
>>> non-idle
>>> - * and resync will be slowed down even though there
>>> might
>>> - * not have been non-resync activity. This will only
>>> - * happen once though. 'last_events' will soon
>>> reflect
>>> - * the state where there is little or no outstanding
>>> - * resync requests, and further resync activity will
>>> - * always make curr_events less than last_events.
>>> - *
>>> - */
>>> - if (init || curr_events - rdev->last_events > 64) {
>>> - rdev->last_events = curr_events;
>>> - idle = 0;
>>> - }
>>> - }
>>> + rcu_read_lock();
>>> + rdev_for_each_rcu(rdev, mddev)
>>> + if (!is_rdev_holder_idle(rdev, init))
>>> + idle = false;
>>> rcu_read_unlock();
>>> +
>>> return idle;
>>> }
>>>
>>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>>> index b57842188f18..1d51c2405d3d 100644
>>> --- a/drivers/md/md.h
>>> +++ b/drivers/md/md.h
>>> @@ -132,7 +132,7 @@ struct md_rdev {
>>>
>>> sector_t sectors; /* Device size (in 512bytes
>>> sectors) */
>>> struct mddev *mddev; /* RAID array if running */
>>> - int last_events; /* IO event timestamp */
>>> + unsigned long last_events; /* IO event timestamp */
>>>
>>> /*
>>> * If meta_bdev is non-NULL, it means that a separate
>>> device is
>>> @@ -520,6 +520,7 @@ struct mddev {
>>> * adding a spare
>>> */
>>>
>>> + unsigned long last_events; /* IO event
>>> timestamp */
>>> atomic_t recovery_active; /* blocks scheduled,
>>> but not
>>> written */
>>> wait_queue_head_t recovery_wait;
>>> sector_t recovery_cp;
>> .
>>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs
2025-04-18 1:09 ` [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs Yu Kuai
@ 2025-04-21 11:59 ` Christoph Hellwig
2025-04-21 13:13 ` Yu Kuai
0 siblings, 1 reply; 18+ messages in thread
From: Christoph Hellwig @ 2025-04-21 11:59 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, xni, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri, Apr 18, 2025 at 09:09:37AM +0800, Yu Kuai wrote:
> - remove the unused blk_mq_in_flight().
That should probably be a separate patch.
> - rename blk_mq_in_flight_rw to blk_mq_count_in_driver_rw, to distinguish
> it from bdev_count_inflight_rw.
I'm not sure why this is needed or related, or even what additional
distinction is added here.
> -
> -void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
> - unsigned int inflight[2])
> +void blk_mq_count_in_driver_rw(struct request_queue *q,
> + struct block_device *part,
> + unsigned int inflight[2])
Any reason to move away from two tab indents for the prototype
continuations in various places in this patch?
> + * Noted, for rq-based block device, use blk_mq_count_in_driver_rw() to get the
> + * number of requests issued to driver.
I'd just change this helper to call blk_mq_count_in_driver_rw for
blk-mq devices and remove the conditional from the sysfs code instead.
That gives us a much more robust and easier to understand API.
> +void bdev_count_inflight_rw(struct block_device *bdev, unsigned int inflight[2]);
Overly long line.
> +static inline unsigned int bdev_count_inflight(struct block_device *bdev)
> +{
> + unsigned int inflight[2];
> +
> + bdev_count_inflight_rw(bdev, inflight);
> +
> + return inflight[0] + inflight[1];
> +}
> #endif /* _LINUX_PART_STAT_H */
Maybe keep this inside of block as it should not be used by
drivers? Also the reimplementation should probably be a separate
patch from the public API change and exporting.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs
2025-04-21 11:59 ` Christoph Hellwig
@ 2025-04-21 13:13 ` Yu Kuai
2025-04-22 6:11 ` Christoph Hellwig
0 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-21 13:13 UTC (permalink / raw)
To: Christoph Hellwig, Yu Kuai
Cc: axboe, xni, agk, snitzer, mpatocka, song, viro, akpm, nadav.amit,
ubizjak, cl, linux-block, linux-kernel, dm-devel, linux-raid,
yi.zhang, yangerkun, johnny.chenyi, yukuai (C)
Hi,
在 2025/04/21 19:59, Christoph Hellwig 写道:
> On Fri, Apr 18, 2025 at 09:09:37AM +0800, Yu Kuai wrote:
>> - remove the unused blk_mq_in_flight().
>
> That should probably be a separate patch.
ok
>
>> - rename blk_mq_in_flight_rw to blk_mq_count_in_driver_rw, to distinguish
>> it from bdev_count_inflight_rw.
>
> I'm not sure why this is needed or related, or even what additional
> distinction is added here.
Because for rq-based devices, there are two different stages:
blk_account_io_start() while allocating a new rq, and
blk_mq_start_request() while issuing the rq to the driver.
When do we consider the request inflight? For iostat, my answer is the
former, because rq->start_time_ns is set there as well. And note that in
the iostat APIs diskstats_show() (/proc/diskstats) and part_stat_show()
(/sys/block/sda/stat), inflight is obtained via part_in_flight(), which
is different from the disk sysfs API (/sys/block/sda/inflight).
>
>> -
>> -void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
>> - unsigned int inflight[2])
>> +void blk_mq_count_in_driver_rw(struct request_queue *q,
>> + struct block_device *part,
>> + unsigned int inflight[2])
>
> Any reason to move away from two tab indents for the prototype
> continuations in various places in this patch?
>
>> + * Noted, for rq-based block device, use blk_mq_count_in_driver_rw() to get the
>> + * number of requests issued to driver.
>
> I'd just change this helper to call blk_mq_count_in_driver_rw for
> blk-mq devices and remove the conditional from the sysfs code instead.
> That gives us a much more robust and easier to understand API.
Ok, and another separate patch, right?
>
>> +void bdev_count_inflight_rw(struct block_device *bdev, unsigned int inflight[2]);
>
> Overly long line.
>
>> +static inline unsigned int bdev_count_inflight(struct block_device *bdev)
>> +{
>> + unsigned int inflight[2];
>> +
>> + bdev_count_inflight_rw(bdev, inflight);
>> +
>> + return inflight[0] + inflight[1];
>> +}
>> #endif /* _LINUX_PART_STAT_H */
>
> Maybe keep this inside of block as it should not not be used by
> drivers? Also the reimplementation should probably be a separate
> patch from the public API change and exporting.
OK, and I should probably first send a separate set containing just the
changes related to this patch.
Thanks,
Kuai
>
> .
>
^ permalink raw reply [flat|nested] 18+ messages in thread
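To illustrate the two stages discussed in the message above, a
paraphrased timeline of the accounting points for an rq-based device
(condensed for illustration, not verbatim kernel code):

/*
 *   blk_account_io_start(rq)    rq allocated for a bio; counted as
 *                               "inflight" by iostat/part_in_flight(),
 *                               and rq->start_time_ns is set here
 *        |
 *        v   (rq may sit in plug/scheduler queues)
 *   blk_mq_start_request(rq)    rq handed to the driver; from here it
 *                               is counted by blk_mq_count_in_driver_rw()
 *        |
 *        v
 *   blk_mq_end_request(rq)      completion; rq leaves both counts
 */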
* Re: [PATCH v2 2/5] md: record dm-raid gendisk in mddev
2025-04-18 1:09 ` [PATCH v2 2/5] md: record dm-raid gendisk in mddev Yu Kuai
@ 2025-04-22 6:00 ` Xiao Ni
0 siblings, 0 replies; 18+ messages in thread
From: Xiao Ni @ 2025-04-22 6:00 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri, Apr 18, 2025 at 9:17 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> A following patch will use this gendisk to check if there is normal IO
> completed or inflight, in order to fix a problem in mdraid where
> foreground IO can be starved by background sync IO.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> drivers/md/dm-raid.c | 3 +++
> drivers/md/md.h | 3 ++-
> 2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
> index 6adc55fd90d3..127138c61be5 100644
> --- a/drivers/md/dm-raid.c
> +++ b/drivers/md/dm-raid.c
> @@ -14,6 +14,7 @@
> #include "raid5.h"
> #include "raid10.h"
> #include "md-bitmap.h"
> +#include "dm-core.h"
>
> #include <linux/device-mapper.h>
>
> @@ -3308,6 +3309,7 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv)
>
> /* Disable/enable discard support on raid set. */
> configure_discard_support(rs);
> + rs->md.dm_gendisk = ti->table->md->disk;
>
> mddev_unlock(&rs->md);
> return 0;
> @@ -3327,6 +3329,7 @@ static void raid_dtr(struct dm_target *ti)
>
> mddev_lock_nointr(&rs->md);
> md_stop(&rs->md);
> + rs->md.dm_gendisk = NULL;
> mddev_unlock(&rs->md);
>
> if (work_pending(&rs->md.event_work))
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index 1cf00a04bcdd..9d55b4630077 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -404,7 +404,8 @@ struct mddev {
> * are happening, so run/
> * takeover/stop are not safe
> */
> - struct gendisk *gendisk;
> + struct gendisk *gendisk; /* mdraid gendisk */
> + struct gendisk *dm_gendisk; /* dm-raid gendisk */
>
> struct kobject kobj;
> int hold_active;
> --
> 2.39.2
>
Looks good to me, reviewed-by: Xiao Ni <xni@redhat.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
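The payoff comes in the next patch: with both fields recorded, idle
detection can pick whichever gendisk actually carries the IO stats. A
minimal sketch (the helper name here is hypothetical; the next patch
open-codes the same ternary):

static struct gendisk *mddev_stats_disk(struct mddev *mddev)
{
	/* dm-raid arrays have no md gendisk; stats live on the dm disk */
	return mddev_is_dm(mddev) ? mddev->dm_gendisk : mddev->gendisk;
}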
* Re: [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs
2025-04-21 13:13 ` Yu Kuai
@ 2025-04-22 6:11 ` Christoph Hellwig
0 siblings, 0 replies; 18+ messages in thread
From: Christoph Hellwig @ 2025-04-22 6:11 UTC (permalink / raw)
To: Yu Kuai
Cc: Christoph Hellwig, axboe, xni, agk, snitzer, mpatocka, song, viro,
akpm, nadav.amit, ubizjak, cl, linux-block, linux-kernel,
dm-devel, linux-raid, yi.zhang, yangerkun, johnny.chenyi,
yukuai (C)
On Mon, Apr 21, 2025 at 09:13:57PM +0800, Yu Kuai wrote:
> > I'm not sure why this is needed or related, or even what additional
> > distinction is added here.
>
> Because for rq-based devices there are two different stages:
> blk_account_io_start() when allocating a new rq, and
> blk_mq_start_request() when issuing the rq to the driver.
>
> When do we consider the request inflight? For iostat, my answer is the
> former, because rq->start_time_ns is set there as well. And note that
> in the iostat APIs diskstats_show (/proc/diskstats) and part_stat_show
> (/sys/block/sda/stat), inflight is obtained via part_in_flight, which
> is different from the disk sysfs API (/sys/block/sda/inflight).
Trying to express this in a not very obvious function name isn't
going to work very well. Documenting your findings in comments is
much better.
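For instance, a comment block along these lines (wording assumed, just
restating the findings above) could sit next to the helpers:

/*
 * An rq-based request passes two accounting points:
 *
 * blk_account_io_start()  - the request is allocated and
 *	rq->start_time_ns is set; from here it counts as inflight in the
 *	part_stat based helpers backing /proc/diskstats and
 *	/sys/block/<dev>/stat.
 *
 * blk_mq_start_request() - the request is issued to the driver; from
 *	here it counts in the tag-iterating helpers backing
 *	/sys/block/<dev>/inflight.
 */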
> >
> > I'd just change this helper to call blk_mq_count_in_driver_rw for
> > blk-mq devices and remove the conditional from the sysfs code instead.
> > That gives us a much more robust and easier to understand API.
>
> Ok, and another separate patch, right?
Yes.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 3/5] md: add a new api sync_io_depth
2025-04-18 1:09 ` [PATCH v2 3/5] md: add a new api sync_io_depth Yu Kuai
@ 2025-04-22 6:15 ` Xiao Ni
0 siblings, 0 replies; 18+ messages in thread
From: Xiao Ni @ 2025-04-22 6:15 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri, Apr 18, 2025 at 9:17 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> Currently, if the sync speed is above speed_min and below speed_max,
> md_do_sync() will wait for all sync IOs to be done before issuing new
> sync IO, meaning the sync IO depth is limited to just 1.
>
> This limit is too low. To prevent the sync speed from dropping
> conspicuously after is_mddev_idle() is fixed in the next patch, add a
> new API for limiting the sync IO depth; the default value is 32.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> drivers/md/md.c | 109 +++++++++++++++++++++++++++++++++++++++---------
> drivers/md/md.h | 1 +
> 2 files changed, 91 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 438e71e45c16..52cadfce7e8d 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -111,32 +111,48 @@ static void md_wakeup_thread_directly(struct md_thread __rcu *thread);
> /* Default safemode delay: 200 msec */
> #define DEFAULT_SAFEMODE_DELAY ((200 * HZ)/1000 +1)
> /*
> - * Current RAID-1,4,5 parallel reconstruction 'guaranteed speed limit'
> - * is 1000 KB/sec, so the extra system load does not show up that much.
> - * Increase it if you want to have more _guaranteed_ speed. Note that
> - * the RAID driver will use the maximum available bandwidth if the IO
> - * subsystem is idle. There is also an 'absolute maximum' reconstruction
> - * speed limit - in case reconstruction slows down your system despite
> - * idle IO detection.
> + * Current RAID-1,4,5,6,10 parallel reconstruction 'guaranteed speed limit'
> + * is sysctl_speed_limit_min, 1000 KB/sec by default, so the extra system load
> + * does not show up that much. Increase it if you want to have more guaranteed
> + * speed. Note that the RAID driver will use the maximum bandwidth
> + * sysctl_speed_limit_max, 200 MB/sec by default, if the IO subsystem is idle.
> *
> - * you can change it via /proc/sys/dev/raid/speed_limit_min and _max.
> - * or /sys/block/mdX/md/sync_speed_{min,max}
> + * Background sync IO speed control:
> + *
> + * - below speed min:
> + * no limit;
> + * - above speed min and below speed max:
> + * a) if mddev is idle, then no limit;
> + * b) if mddev is busy handling normal IO, then limit inflight sync IO
> + * to sync_io_depth;
> + * - above speed max:
> + * sync IO can't be issued;
> + *
> + * The following configurations can be changed via /proc/sys/dev/raid/ for
> + * the system, or /sys/block/mdX/md/ for one array.
> */
> -
> static int sysctl_speed_limit_min = 1000;
> static int sysctl_speed_limit_max = 200000;
> -static inline int speed_min(struct mddev *mddev)
> +static int sysctl_sync_io_depth = 32;
> +
> +static int speed_min(struct mddev *mddev)
> {
> return mddev->sync_speed_min ?
> mddev->sync_speed_min : sysctl_speed_limit_min;
> }
>
> -static inline int speed_max(struct mddev *mddev)
> +static int speed_max(struct mddev *mddev)
> {
> return mddev->sync_speed_max ?
> mddev->sync_speed_max : sysctl_speed_limit_max;
> }
>
> +static int sync_io_depth(struct mddev *mddev)
> +{
> + return mddev->sync_io_depth ?
> + mddev->sync_io_depth : sysctl_sync_io_depth;
> +}
> +
> static void rdev_uninit_serial(struct md_rdev *rdev)
> {
> if (!test_and_clear_bit(CollisionCheck, &rdev->flags))
> @@ -293,14 +309,21 @@ static const struct ctl_table raid_table[] = {
> .procname = "speed_limit_min",
> .data = &sysctl_speed_limit_min,
> .maxlen = sizeof(int),
> - .mode = S_IRUGO|S_IWUSR,
> + .mode = 0644,
> .proc_handler = proc_dointvec,
> },
> {
> .procname = "speed_limit_max",
> .data = &sysctl_speed_limit_max,
> .maxlen = sizeof(int),
> - .mode = S_IRUGO|S_IWUSR,
> + .mode = 0644,
> + .proc_handler = proc_dointvec,
> + },
> + {
> + .procname = "sync_io_depth",
> + .data = &sysctl_sync_io_depth,
> + .maxlen = sizeof(int),
> + .mode = 0644,
> .proc_handler = proc_dointvec,
> },
> };
> @@ -5091,7 +5114,7 @@ static ssize_t
> sync_min_show(struct mddev *mddev, char *page)
> {
> return sprintf(page, "%d (%s)\n", speed_min(mddev),
> - mddev->sync_speed_min ? "local": "system");
> + mddev->sync_speed_min ? "local" : "system");
> }
>
> static ssize_t
> @@ -5100,7 +5123,7 @@ sync_min_store(struct mddev *mddev, const char *buf, size_t len)
> unsigned int min;
> int rv;
>
> - if (strncmp(buf, "system", 6)==0) {
> + if (strncmp(buf, "system", 6) == 0) {
> min = 0;
> } else {
> rv = kstrtouint(buf, 10, &min);
> @@ -5120,7 +5143,7 @@ static ssize_t
> sync_max_show(struct mddev *mddev, char *page)
> {
> return sprintf(page, "%d (%s)\n", speed_max(mddev),
> - mddev->sync_speed_max ? "local": "system");
> + mddev->sync_speed_max ? "local" : "system");
> }
>
> static ssize_t
> @@ -5129,7 +5152,7 @@ sync_max_store(struct mddev *mddev, const char *buf, size_t len)
> unsigned int max;
> int rv;
>
> - if (strncmp(buf, "system", 6)==0) {
> + if (strncmp(buf, "system", 6) == 0) {
> max = 0;
> } else {
> rv = kstrtouint(buf, 10, &max);
> @@ -5145,6 +5168,35 @@ sync_max_store(struct mddev *mddev, const char *buf, size_t len)
> static struct md_sysfs_entry md_sync_max =
> __ATTR(sync_speed_max, S_IRUGO|S_IWUSR, sync_max_show, sync_max_store);
>
> +static ssize_t
> +sync_io_depth_show(struct mddev *mddev, char *page)
> +{
> + return sprintf(page, "%d (%s)\n", sync_io_depth(mddev),
> + mddev->sync_io_depth ? "local" : "system");
> +}
> +
> +static ssize_t
> +sync_io_depth_store(struct mddev *mddev, const char *buf, size_t len)
> +{
> + unsigned int max;
> + int rv;
> +
> + if (strncmp(buf, "system", 6) == 0) {
> + max = 0;
> + } else {
> + rv = kstrtouint(buf, 10, &max);
> + if (rv < 0)
> + return rv;
> + if (max == 0)
> + return -EINVAL;
> + }
> + mddev->sync_io_depth = max;
> + return len;
> +}
> +
> +static struct md_sysfs_entry md_sync_io_depth =
> +__ATTR_RW(sync_io_depth);
> +
> static ssize_t
> degraded_show(struct mddev *mddev, char *page)
> {
> @@ -5671,6 +5723,7 @@ static struct attribute *md_redundancy_attrs[] = {
> &md_mismatches.attr,
> &md_sync_min.attr,
> &md_sync_max.attr,
> + &md_sync_io_depth.attr,
> &md_sync_speed.attr,
> &md_sync_force_parallel.attr,
> &md_sync_completed.attr,
> @@ -8927,6 +8980,23 @@ static sector_t md_sync_position(struct mddev *mddev, enum sync_action action)
> }
> }
>
> +static bool sync_io_within_limit(struct mddev *mddev)
> +{
> + int io_sectors;
> +
> + /*
> + * For raid456, sync IO is one stripe (4k) per IO; for other levels,
> + * it's RESYNC_PAGES (64k) per IO.
> + */
> + if (mddev->level == 4 || mddev->level == 5 || mddev->level == 6)
> + io_sectors = 8;
> + else
> + io_sectors = 128;
> +
> + return atomic_read(&mddev->recovery_active) <
> + io_sectors * sync_io_depth(mddev);
> +}
> +
> #define SYNC_MARKS 10
> #define SYNC_MARK_STEP (3*HZ)
> #define UPDATE_FREQUENCY (5*60*HZ)
> @@ -9195,7 +9265,8 @@ void md_do_sync(struct md_thread *thread)
> msleep(500);
> goto repeat;
> }
> - if (!is_mddev_idle(mddev, 0)) {
> + if (!sync_io_within_limit(mddev) &&
> + !is_mddev_idle(mddev, 0)) {
> /*
> * Give other IO more of a chance.
> * The faster the devices, the less we wait.
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index 9d55b4630077..b57842188f18 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -484,6 +484,7 @@ struct mddev {
> /* if zero, use the system-wide default */
> int sync_speed_min;
> int sync_speed_max;
> + int sync_io_depth;
>
> /* resync even though the same disks are shared among md-devices */
> int parallel_resync;
> --
> 2.39.2
>
Looks good to me. Reviewed-by: Xiao Ni <xni@redhat.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
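As a worked example of sync_io_within_limit() above with the default
sync_io_depth of 32: for raid4/5/6 each sync IO covers one 4k stripe
(8 sectors), so new sync IO is throttled once recovery_active reaches
8 * 32 = 256 sectors (128 KiB); for the other levels each sync IO covers
RESYNC_PAGES (64k, 128 sectors), giving a limit of 128 * 32 = 4096
sectors (2 MiB).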
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-18 1:09 ` [PATCH v2 4/5] md: fix is_mddev_idle() Yu Kuai
2025-04-19 1:42 ` Su Yue
@ 2025-04-22 6:35 ` Xiao Ni
2025-04-27 1:37 ` Yu Kuai
1 sibling, 1 reply; 18+ messages in thread
From: Xiao Ni @ 2025-04-22 6:35 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri, Apr 18, 2025 at 9:17 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> If sync_speed is above speed_min, then is_mddev_idle() will be called
> for each sync IO to check if the array is idle, and inflight sync IO
> will be limited if the array is not idle.
>
> However, when running mkfs.ext4 on a large raid5 array while recovery
> is in progress, it's found that sync_speed is already above speed_min
> while lots of stripes are used for sync IO, causing a long delay for
> mkfs.ext4.
>
> The root cause is the following check in is_mddev_idle():
>
> t1: submit sync IO: events1 = completed IO - issued sync IO
> t2: submit next sync IO: events2 = completed IO - issued sync IO
> if (events2 - events1 > 64)
>
> As a consequence, the more sync IO is issued, the less likely the
> check will pass. And when more normal IO has completed than sync IO
> has been issued, the condition will finally pass and is_mddev_idle()
> will return false; however, last_events will be updated, hence
> is_mddev_idle() can only return false once in a while.
>
> Fix this problem by changing the check: consider the array idle only
> if all of the following hold since the last check:
>
> 1) mddev doesn't have normal IO completed;
> 2) mddev doesn't have normal IO inflight;
> 3) for any member disk that is a partition, the other partitions on
> the same disk don't have IO completed.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> drivers/md/md.c | 84 +++++++++++++++++++++++++++----------------------
> drivers/md/md.h | 3 +-
> 2 files changed, 48 insertions(+), 39 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 52cadfce7e8d..dfd85a5d6112 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -8625,50 +8625,58 @@ void md_cluster_stop(struct mddev *mddev)
> put_cluster_ops(mddev);
> }
>
> -static int is_mddev_idle(struct mddev *mddev, int init)
> +static bool is_rdev_holder_idle(struct md_rdev *rdev, bool init)
> {
> + unsigned long last_events = rdev->last_events;
> +
> + if (!bdev_is_partition(rdev->bdev))
> + return true;
> +
> + /*
> + * If rdev is a partition, and the user doesn't issue IO to the array,
> + * the array is still not idle if the user issues IO to other partitions.
> + */
> + rdev->last_events = part_stat_read_accum(rdev->bdev->bd_disk->part0,
> + sectors) -
> + part_stat_read_accum(rdev->bdev, sectors);
> +
> + if (!init && rdev->last_events > last_events)
> + return false;
> +
> + return true;
> +}
> +
> +/*
> + * mddev is idle if the following conditions are met since the last check:
> + * 1) mddev doesn't have normal IO completed;
> + * 2) mddev doesn't have inflight normal IO;
> + * 3) if any member disk is a partition, the other partitions on the same
> + * disk don't have IO completed;
> + *
> + * Note that this check relies on IO accounting being enabled.
> + */
> +static bool is_mddev_idle(struct mddev *mddev, int init)
> +{
> + unsigned long last_events = mddev->last_events;
> + struct gendisk *disk;
> struct md_rdev *rdev;
> - int idle;
> - int curr_events;
> + bool idle = true;
>
> - idle = 1;
> - rcu_read_lock();
> - rdev_for_each_rcu(rdev, mddev) {
> - struct gendisk *disk = rdev->bdev->bd_disk;
> + disk = mddev_is_dm(mddev) ? mddev->dm_gendisk : mddev->gendisk;
> + if (!disk)
> + return true;
>
> - if (!init && !blk_queue_io_stat(disk->queue))
> - continue;
> + mddev->last_events = part_stat_read_accum(disk->part0, sectors);
> + if (!init && (mddev->last_events > last_events ||
> + bdev_count_inflight(disk->part0)))
> + idle = false;
>
> - curr_events = (int)part_stat_read_accum(disk->part0, sectors) -
> - atomic_read(&disk->sync_io);
> - /* sync IO will cause sync_io to increase before the disk_stats
> - * as sync_io is counted when a request starts, and
> - * disk_stats is counted when it completes.
> - * So resync activity will cause curr_events to be smaller than
> - * when there was no such activity.
> - * non-sync IO will cause disk_stat to increase without
> - * increasing sync_io so curr_events will (eventually)
> - * be larger than it was before. Once it becomes
> - * substantially larger, the test below will cause
> - * the array to appear non-idle, and resync will slow
> - * down.
> - * If there is a lot of outstanding resync activity when
> - * we set last_event to curr_events, then all that activity
> - * completing might cause the array to appear non-idle
> - * and resync will be slowed down even though there might
> - * not have been non-resync activity. This will only
> - * happen once though. 'last_events' will soon reflect
> - * the state where there is little or no outstanding
> - * resync requests, and further resync activity will
> - * always make curr_events less than last_events.
> - *
> - */
> - if (init || curr_events - rdev->last_events > 64) {
> - rdev->last_events = curr_events;
> - idle = 0;
> - }
> - }
> + rcu_read_lock();
> + rdev_for_each_rcu(rdev, mddev)
> + if (!is_rdev_holder_idle(rdev, init))
> + idle = false;
> rcu_read_unlock();
> +
> return idle;
> }
>
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index b57842188f18..1d51c2405d3d 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -132,7 +132,7 @@ struct md_rdev {
>
> sector_t sectors; /* Device size (in 512bytes sectors) */
> struct mddev *mddev; /* RAID array if running */
> - int last_events; /* IO event timestamp */
> + unsigned long last_events; /* IO event timestamp */
>
> /*
> * If meta_bdev is non-NULL, it means that a separate device is
> @@ -520,6 +520,7 @@ struct mddev {
> * adding a spare
> */
>
> + unsigned long last_events; /* IO event timestamp */
Can we use another name? mddev already has an events counter, and this
name can easily be confused with it.
Regards
Xiao
> atomic_t recovery_active; /* blocks scheduled, but not written */
> wait_queue_head_t recovery_wait;
> sector_t recovery_cp;
> --
> 2.39.2
>
^ permalink raw reply [flat|nested] 18+ messages in thread
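To make the failure mode from the commit message concrete (numbers
invented for illustration): suppose that between two checks 800 sectors
of sync IO are issued, 750 of them complete, and 100 sectors of normal
IO complete. Then events2 - events1 = (750 + 100) - 800 = 50, which is
not above 64, so the old check declares the array idle even though
normal IO is clearly flowing; and the faster resync runs, the harder it
is for foreground IO to ever trip the threshold.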
* Re: [PATCH v2 5/5] md: cleanup accounting for issued sync IO
2025-04-18 1:09 ` [PATCH v2 5/5] md: cleanup accounting for issued sync IO Yu Kuai
@ 2025-04-22 6:36 ` Xiao Ni
0 siblings, 0 replies; 18+ messages in thread
From: Xiao Ni @ 2025-04-22 6:36 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, yukuai3, viro, akpm,
nadav.amit, ubizjak, cl, linux-block, linux-kernel, dm-devel,
linux-raid, yi.zhang, yangerkun, johnny.chenyi
On Fri, Apr 18, 2025 at 9:17 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> The sync IO accounting is no longer used and can be removed; also
> remove the field 'gendisk->sync_io'.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
> drivers/md/md.h | 11 -----------
> drivers/md/raid1.c | 3 ---
> drivers/md/raid10.c | 9 ---------
> drivers/md/raid5.c | 8 --------
> include/linux/blkdev.h | 1 -
> 5 files changed, 32 deletions(-)
>
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index 1d51c2405d3d..07b40a55ed5f 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -717,17 +717,6 @@ static inline int mddev_trylock(struct mddev *mddev)
> }
> extern void mddev_unlock(struct mddev *mddev);
>
> -static inline void md_sync_acct(struct block_device *bdev, unsigned long nr_sectors)
> -{
> - if (blk_queue_io_stat(bdev->bd_disk->queue))
> - atomic_add(nr_sectors, &bdev->bd_disk->sync_io);
> -}
> -
> -static inline void md_sync_acct_bio(struct bio *bio, unsigned long nr_sectors)
> -{
> - md_sync_acct(bio->bi_bdev, nr_sectors);
> -}
> -
> struct md_personality
> {
> struct md_submodule_head head;
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index de9bccbe7337..657d481525be 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -2382,7 +2382,6 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio)
>
> wbio->bi_end_io = end_sync_write;
> atomic_inc(&r1_bio->remaining);
> - md_sync_acct(conf->mirrors[i].rdev->bdev, bio_sectors(wbio));
>
> submit_bio_noacct(wbio);
> }
> @@ -3055,7 +3054,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
> bio = r1_bio->bios[i];
> if (bio->bi_end_io == end_sync_read) {
> read_targets--;
> - md_sync_acct_bio(bio, nr_sectors);
> if (read_targets == 1)
> bio->bi_opf &= ~MD_FAILFAST;
> submit_bio_noacct(bio);
> @@ -3064,7 +3062,6 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
> } else {
> atomic_set(&r1_bio->remaining, 1);
> bio = r1_bio->bios[r1_bio->read_disk];
> - md_sync_acct_bio(bio, nr_sectors);
> if (read_targets == 1)
> bio->bi_opf &= ~MD_FAILFAST;
> submit_bio_noacct(bio);
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index ba32bac975b8..dce06bf65016 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -2426,7 +2426,6 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
>
> atomic_inc(&conf->mirrors[d].rdev->nr_pending);
> atomic_inc(&r10_bio->remaining);
> - md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(tbio));
>
> if (test_bit(FailFast, &conf->mirrors[d].rdev->flags))
> tbio->bi_opf |= MD_FAILFAST;
> @@ -2448,8 +2447,6 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
> bio_copy_data(tbio, fbio);
> d = r10_bio->devs[i].devnum;
> atomic_inc(&r10_bio->remaining);
> - md_sync_acct(conf->mirrors[d].replacement->bdev,
> - bio_sectors(tbio));
> submit_bio_noacct(tbio);
> }
>
> @@ -2583,13 +2580,10 @@ static void recovery_request_write(struct mddev *mddev, struct r10bio *r10_bio)
> d = r10_bio->devs[1].devnum;
> if (wbio->bi_end_io) {
> atomic_inc(&conf->mirrors[d].rdev->nr_pending);
> - md_sync_acct(conf->mirrors[d].rdev->bdev, bio_sectors(wbio));
> submit_bio_noacct(wbio);
> }
> if (wbio2) {
> atomic_inc(&conf->mirrors[d].replacement->nr_pending);
> - md_sync_acct(conf->mirrors[d].replacement->bdev,
> - bio_sectors(wbio2));
> submit_bio_noacct(wbio2);
> }
> }
> @@ -3757,7 +3751,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> r10_bio->sectors = nr_sectors;
>
> if (bio->bi_end_io == end_sync_read) {
> - md_sync_acct_bio(bio, nr_sectors);
> bio->bi_status = 0;
> submit_bio_noacct(bio);
> }
> @@ -4880,7 +4873,6 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr,
> r10_bio->sectors = nr_sectors;
>
> /* Now submit the read */
> - md_sync_acct_bio(read_bio, r10_bio->sectors);
> atomic_inc(&r10_bio->remaining);
> read_bio->bi_next = NULL;
> submit_bio_noacct(read_bio);
> @@ -4940,7 +4932,6 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio)
> continue;
>
> atomic_inc(&rdev->nr_pending);
> - md_sync_acct_bio(b, r10_bio->sectors);
> atomic_inc(&r10_bio->remaining);
> b->bi_next = NULL;
> submit_bio_noacct(b);
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 6389383166c0..ca5b0e8ba707 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1240,10 +1240,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
> }
>
> if (rdev) {
> - if (s->syncing || s->expanding || s->expanded
> - || s->replacing)
> - md_sync_acct(rdev->bdev, RAID5_STRIPE_SECTORS(conf));
> -
> set_bit(STRIPE_IO_STARTED, &sh->state);
>
> bio_init(bi, rdev->bdev, &dev->vec, 1, op | op_flags);
> @@ -1300,10 +1296,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
> submit_bio_noacct(bi);
> }
> if (rrdev) {
> - if (s->syncing || s->expanding || s->expanded
> - || s->replacing)
> - md_sync_acct(rrdev->bdev, RAID5_STRIPE_SECTORS(conf));
> -
> set_bit(STRIPE_IO_STARTED, &sh->state);
>
> bio_init(rbi, rrdev->bdev, &dev->rvec, 1, op | op_flags);
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index e39c45bc0a97..f3a625b00734 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -182,7 +182,6 @@ struct gendisk {
> struct list_head slave_bdevs;
> #endif
> struct timer_rand_state *random;
> - atomic_t sync_io; /* RAID */
> struct disk_events *ev;
>
> #ifdef CONFIG_BLK_DEV_ZONED
> --
> 2.39.2
>
Looks good to me. Reviewed-by: Xiao Ni <xni@redhat.com>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-22 6:35 ` Xiao Ni
@ 2025-04-27 1:37 ` Yu Kuai
2025-04-27 2:45 ` Xiao Ni
0 siblings, 1 reply; 18+ messages in thread
From: Yu Kuai @ 2025-04-27 1:37 UTC (permalink / raw)
To: Xiao Ni, Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, viro, akpm, nadav.amit,
ubizjak, cl, linux-block, linux-kernel, dm-devel, linux-raid,
yi.zhang, yangerkun, johnny.chenyi, yukuai (C)
Hi,
On 2025/04/22 14:35, Xiao Ni wrote:
>> + unsigned long last_events; /* IO event timestamp */
> Can we use another name? mddev already has an events counter, and this
> name can easily be confused with it.
Sorry for the late reply.
Sure, how about normal_IO_events?
Thanks,
Kuai
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2 4/5] md: fix is_mddev_idle()
2025-04-27 1:37 ` Yu Kuai
@ 2025-04-27 2:45 ` Xiao Ni
0 siblings, 0 replies; 18+ messages in thread
From: Xiao Ni @ 2025-04-27 2:45 UTC (permalink / raw)
To: Yu Kuai
Cc: axboe, agk, snitzer, mpatocka, song, viro, akpm, nadav.amit,
ubizjak, cl, linux-block, linux-kernel, dm-devel, linux-raid,
yi.zhang, yangerkun, johnny.chenyi, yukuai (C)
On Sun, Apr 27, 2025 at 9:37 AM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> Hi,
>
> 在 2025/04/22 14:35, Xiao Ni 写道:
> >> + unsigned long last_events; /* IO event timestamp */
> > Can we use another name? mddev already has an events counter, and this
> > name can easily be confused with it.
>
> Sorry for the late reply.
>
> Sure, how about normal_IO_events?
It looks much better, thanks. I guess the uppercase IO is a typo,
right? We usually use lowercase.
Best Regards
Xiao
>
> Thanks,
> Kuai
>
^ permalink raw reply [flat|nested] 18+ messages in thread
Thread overview: 18+ messages
2025-04-18 1:09 [PATCH v2 0/5] md: fix is_mddev_idle() Yu Kuai
2025-04-18 1:09 ` [PATCH v2 1/5] block: cleanup and export bdev IO inflight APIs Yu Kuai
2025-04-21 11:59 ` Christoph Hellwig
2025-04-21 13:13 ` Yu Kuai
2025-04-22 6:11 ` Christoph Hellwig
2025-04-18 1:09 ` [PATCH v2 2/5] md: record dm-raid gendisk in mddev Yu Kuai
2025-04-22 6:00 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 3/5] md: add a new api sync_io_depth Yu Kuai
2025-04-22 6:15 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 4/5] md: fix is_mddev_idle() Yu Kuai
2025-04-19 1:42 ` Su Yue
2025-04-19 2:00 ` Yu Kuai
2025-04-19 5:03 ` Su Yue
2025-04-22 6:35 ` Xiao Ni
2025-04-27 1:37 ` Yu Kuai
2025-04-27 2:45 ` Xiao Ni
2025-04-18 1:09 ` [PATCH v2 5/5] md: cleanup accounting for issued sync IO Yu Kuai
2025-04-22 6:36 ` Xiao Ni